Tag Archives: tools

Facebook network

A sociogram of my Facebook network

I am currently trying to catch up on the Coursera MOOC on social network analysis. My main aim in taking the course is to force myself to learn to use Gephi for network analysis. The course so far has been clear and well presented, but it's early days. Also, using Gephi on OS X Mavericks has been a pain, largely because Gephi won't run on the default install of Java. The solution can be found on the Gephi forums here, although I'm still having some problems with Java.

I don't use Facebook much and was a bit surprised at the density of the network as a whole, but that number of sub-clusters was less surprising given the stop-start way the network developed. I'll have to find out who the single unconnected nodes are once the Java issues have been resolved.

Social Network Analysis and Digital Data Analysis

Notes on a presentation by Pablo Paredes. The abstract for the seminar is:

This presentation will be about how to make social network analysis from social media services such as Facebook and Twitter. Although traditional SNA packages are able to analyse data from any source, the volume of data from these new services can make convenient the use of additional technologies. The case in the presentation will be about a study of the degrees of distance on Twitter, considering different steps as making use of streaming API, filtering and computing results.

The presentation is drawn from the paper: Fábrega, J. & Paredes, P. (2013) Social Contagion and Cascade Behaviours on Twitter. Information 4/2: 171–181.

These are my brief and partial notes on the seminar taken live (so “typos ahead!”).

The seminar looked at gathering data from social network sites and at a research project on contagion in digital data.

Data access requires knowledge of each platform's API. Apigee documents the APIs of most social networks, although, as an intermediary, it can introduce further issues when interfacing different software tools (Python toolkits, for example, can access the APIs directly rather than going through Apigee). In their research, Twitter data was extracted using Python tools such as Tweepy (for calls to the Twitter API) and NetworkX (a Python library for SNA), along with additional libraries. These tools allow the investigation of forms of SNA beyond ego-centric analysis, for example as sketched below.
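As a rough illustration of the Tweepy-plus-NetworkX combination (my sketch, not the code from the paper), pulling one hop of a follower network and loading it into a graph might look like this. The credentials and the seed account are placeholders, and the method names assume Tweepy v4:

```python
# A minimal sketch, not the authors' code: one hop of a Twitter
# follower network pulled with Tweepy and loaded into NetworkX.
# Credentials and the seed account are placeholders.
import tweepy
import networkx as nx

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

seed = api.get_user(screen_name="some_account")  # hypothetical seed user
g = nx.DiGraph()

# One hop of followers; deeper crawls hit Twitter's rate limits quickly.
for follower_id in tweepy.Cursor(api.get_follower_ids, user_id=seed.id).items(200):
    g.add_edge(follower_id, seed.id)  # an edge means "follows"

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```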

Pablo presented a network diagram of Twitter ego-networks built with NodeXL, but direct access to the Twitter API gives more options for alternative forms of network analysis. Analysing the diffusion of information on Twitter was not possible in NodeXL.

They used the three degrees of influence theory from Christakis & Fowler (2008): social influence diffuses to three degrees but not beyond, due to noisy communication and technology/time issues leading to information decay. For example, most retweets take place within 48 hours, so diffusion tends not to extend beyond a friend's friend's friend. This relates to network instability and loss of interest from users beyond three degrees, alongside increasing information competition: beyond three degrees the competition is too intense and diffusion decomposes.

The research found a 3–5% retweet rate in the diffusion of a single tweet. RT rates were higher when a hashtag was used and correlated with the number of followers of the originator, but correlated negatively with @-mentions in the original tweet, possibly because @-mentions are read as private conversations. Overall, less than 1% of retweets went beyond three degrees.
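To make the "beyond three degrees" measure concrete, here is a minimal sketch (again my illustration, not the authors' method) that, given a follower graph and the set of users who retweeted a tweet, estimates the share of retweets travelling more than three hops from the originator. The graph and the ID lists are assumed inputs:

```python
# Illustration only: fraction of retweeters more than three hops
# from the originator, using hop distance in the follower graph.
# g, origin and retweeters are assumed to exist (see earlier sketch).
import networkx as nx

def share_beyond_three_degrees(g, origin, retweeters):
    """Fraction of retweeters more than three hops from the originator."""
    # Treat ties as undirected for hop distance, as in degrees of influence.
    h = g.to_undirected() if g.is_directed() else g
    # All nodes within three hops of the originator.
    within = nx.single_source_shortest_path_length(h, origin, cutoff=3)
    if not retweeters:
        return 0.0
    beyond = sum(1 for user in retweeters if user not in within)
    return beyond / len(retweeters)
```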

The conclusion is that diffusion in digital networks is similar to that found in physical networks, which implies that there are human barriers to communication in online spaces. But the research is limited by restricted access to the Twitter API as well as by Twitter's privacy policies. Replicability becomes very difficult as a result, and the issue is compounded as API versions change, so software libraries and tools stop working or no longer work in the same way. It is also worth noting that there is no way of knowing how Twitter samples the 1% of tweets provided through the API. Access to 100% of the Twitter data would therefore be needed to provide a clear baseline for understanding Twitter's samples and to justify the network boundaries.

This points to the importance of researchers writing their own code, with R or Python preferable as they are easier to learn and have larger support communities.

What do you use to collate your learning?

I've been looking at ePortfolios recently, partly because a project I'm working on uses an ePortfolio for technician language learners – which I've written more on here – but also because I've become more interested in the importance of curation in e-learning. Yet ePortfolios just never really seemed to work for me. As Martin Weller stated:

They [eportfolios] can be seen as a case study of how educational technology gets it wrong…Blogs are good enough for eportfolios, if what you want from an eportfolio is for people to actually, you know, use them.

But this post from Jane Hart grabbed my attention and in particular, the statement:

The organization will keep a record of your training activity in their LMS noting courses taken, course completions, results of quizzes, etc – and if the LMS is one of the newer social versions this will probably also even record your social activity, ie the number of posts you have made or comments you have added. However, this training record belongs to the company as THEY own your learning, and even if they were happy to share it with you, this "activity record" generated automatically from their systems doesn't actually show what impact your training has had on your productivity or performance.

She goes on to state that ePortfolios are:

… about recognizing that most of your real learning takes place continuously – and frequently unintentionally – in many other ways e.g.

  • in your daily dealings with your colleagues, customers, clients or friends
  • by being active in the fast moving flow of ideas and new resources being exchanged in your professional networks
  • by keeping up to date with what’s happening in your industry or profession through a constant stream of industry news

But this is about more than narrating your work: it also means reviewing, collating and reflecting on your work-in-learning and learning-in-work.

But … I still can't engage with bespoke ePortfolio tools: they still seem just 'wrong', yet I also know I'm not engaging here in a way that values my learning. I do wonder whether, actually, I'm not acknowledging the importance of my learning-in-work enough to capture it systematically and rigorously as part of my day-to-day work.

Tools of e-learning

I noticed this post from the Centre for Learning and Performance Technologies [C4LPT] on ten key tools for learning. There's a very clear triangle forming of course/content "authorware" [eg, Screenr or Prezi], collaboration tools [eg, Etherpad or DimDim] and individual tools [eg, Evernote or, arguably, Posterous].

This highlights a question: how might these tools work together? And what might it mean for the L&D department that focuses on courseware suited to routine learning for routinised work, as opposed to collaboration and reflection, which are potentially more focused on creativity, innovation and expansive learning?