Category Archives: technology

Personal learning environments

I’m currently writing up some ideas on open online professional learning that include consideration of personal learning networks. I came across this interesting post from Martin Weller on the apparent decline in interest in, or discussion of, personal learning networks. The reasons suggested include the mainstreaming of the practices associated with PLEs and a consolidation of the tools used into a fairly generic set of software, but also that the (research) agenda has shifted from personal learning to institutionally provided personalised learning, partly driven by learning analytics.

 

MOOCs, automation, artificial intelligence and educational agents

George Veletsianos is speaking at a seminar hosted by the DiCE research group at the University of Edinburgh. The hashtag for the event is #edindice and the subject is MOOCs, automation and artificial intelligence.

[These notes were taken live and retain some of the rough syntax of live note-taking… some may call that ‘authentic’!]
 
George began by stating that this is an opportune time for the discussion, given MOOCs in the media, developments around the Turing Test, the MIT Media Lab’s storytelling bots used for early-years second-language skills, and Google’s self-driving cars, all bringing together notions of AI, intelligent beings etc.
Three main topics: (1) MOOCs as a sociocultural phenomenon; (2) the automation of teaching; and (3) pedagogical agents and the automation of teaching.

MOOCs: George first experienced these in 2011 with Change11 as a facilitator, and he uses them as an object of study in his postgraduate teaching and research. He has mainly participated as an observer/drop-out.

MOOCs may be understood as courses of learning but also as sociocultural phenomena in response to the perceived failure of higher education. In particular, MOOCs can be seen as a response to the rising costs of higher education in North America and as a symptom of the vocationalisation of higher education. Workplace training drives much of the discussion on MOOCs, as illustrated by Udacity changing from a higher education to a training provider and introducing the notion of the nano-degree linked to employability. There are also changes in the political landscape, with cuts to state funding of HEIs in the USA, a discourse of public sector inefficiencies, and solutions based on competition and diversity of provision being preferred. MOOCs also represent the idea of technology as a solution to issues in education such as cost and student engagement, and MOOCs are indicative of scholarly failure: the disciplines and knowledge of education, such as the learning sciences, are not available to many, as knowledge is locked into costly journals and couched in obscure language. MOOCs also represent the idea that education can be packaged and automated at scale. Technologies have long been seen as solutions to providing education at scale, including TV, radio and recorded lectures, so education is seen as content delivery.
He also highlighted that xMOOCs came out of computer science rather than education schools and are driven by rubrics of efficiency and automation.
Pressey (1933) called for an industrial revolution of education through the use of teaching machines that provide information, allow the learner to respond and provide feedback on that response. B.F. Skinner also created a teaching machine, in 1935, based on stimulus/response, with lights indicating whether a response is correct or not.
Similarly, MOOCs adopt discourses around liberating teachers from administration and grading so that they can spend more time teaching. These arguments are part of a well-developed narrative of efficiency in education. But others have warned against the trend towards the commodification of education (Noble 1988), and this commodification can be seen in the adoption of the LMS and “shovelware” (information masquerading as a course).
Automation is increasingly encroaching into academia via reference management software, Google Scholar alerts, TOC alerts from journals, social media automation, RSS feeds, content aggregators (Feedly, Netvibes) and programming of the web through, for example, If This Then That (IFTTT).
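[As an aside, a toy sketch of this kind of automation: the Python below polls a couple of RSS/TOC feeds and surfaces items matching a keyword, roughly what an aggregator or an IFTTT recipe does behind the scenes. The feed URLs and keyword are placeholders, not real services.]

```python
# Toy sketch of web automation as described above: poll journal RSS/TOC feeds
# and surface items matching a keyword, roughly what Feedly or an IFTTT
# recipe does. The feed URLs and keyword are placeholders, not real endpoints.
import feedparser  # third-party: pip install feedparser

FEEDS = [
    "https://example.org/journal-of-ed-tech/rss",        # hypothetical feed
    "https://example.org/learning-media-tech/toc.xml",   # hypothetical feed
]
KEYWORD = "mooc"

def matching_items(feed_urls, keyword):
    """Yield (title, link) for feed entries whose title mentions the keyword."""
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            title = entry.get("title", "")
            if keyword.lower() in title.lower():
                yield title, entry.get("link", "")

for title, link in matching_items(FEEDS, KEYWORD):
    print(f"{title}\n  {link}")
```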
As a case, he looks at the Mechanical MOOC, which is based on the assumptions that high-quality open learning resources can be assembled, that learners can automatically come together to learn and that they can be assessed without human involvement, so the MOOC can be automated. An email scheduler coordinates the learning, OpenStudy is used for peer support and interactive coding is automatically assessed through Codecademy. So it attracts strongly self-directed and capable learners. But research indicates the place and visibility of teachers remains important (Ross & Bayne 2014).
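[Another aside: to make the “an email scheduler coordinates the learning” point concrete, here is a minimal sketch of a weekly lesson dispatcher. The lessons, dates and addresses are invented and this is not the Mechanical MOOC’s actual implementation.]

```python
# Sketch of an automated weekly lesson email in the spirit of the Mechanical
# MOOC's scheduler. Lessons, dates and addresses are placeholders; a real
# system would hand the message to an SMTP relay rather than print it.
from datetime import date
from email.message import EmailMessage

LESSONS = [  # one item per week of the course
    "Week 1: Getting started with Python",
    "Week 2: Variables and control flow",
    "Week 3: Functions and modules",
]
COURSE_START = date(2014, 1, 6)        # hypothetical start date
LEARNERS = ["learner@example.org"]     # placeholder mailing list

def current_lesson(today=None):
    """Return the lesson for the current week, or None outside the course."""
    today = today or date.today()
    week = (today - COURSE_START).days // 7
    return LESSONS[week] if 0 <= week < len(LESSONS) else None

def compose_lesson_email(lesson):
    """Build the weekly email; in production it would go to smtplib.SMTP(...).send_message()."""
    msg = EmailMessage()
    msg["Subject"] = lesson
    msg["From"] = "course-bot@example.org"
    msg["To"] = ", ".join(LEARNERS)
    msg.set_content(f"This week's material: {lesson}\nDiscuss it with your study group.")
    return msg

if __name__ == "__main__":
    lesson = current_lesson()
    if lesson:
        print(compose_lesson_email(lesson))
```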
Moving on to educational agents: avatars that present to, and possibly respond to, learners and tend to be similar to virtual assistants. Such agents are claimed to assist in learning, motivation, engagement, play and fun, but the evidence to support these claims is ambiguous and often “strange”. In the research, gender, race, design and functions all interact, and learners often respond based on the stereotypes used in human interactions, with the most appealing agents tending to have a more positive effect on learning. Context also mediates perceptions, and so how pedagogical agents are perceived and understood.
The relationship between agents and learners, and their interactions, is the subject of a number of studies on topics of discussion and social practices. These found that students and agents engage in small talk and playfulness even though the students are aware they are interacting with an artificial agent. The studies also saw aggressive interactions from learners, especially if the expert agent is unable to answer a query. Students also shared personal information with the agents. Agents were positioned in different roles: as a learning companion, as a mediator between academic staff and learners, and as a partner.
So social and psychological issues are as important as technology design issues. Do we need a Turing test for MOOC instruction? How we design technologies reflects as well as shapes our cultures.
[Ends with Q&A discussion]

LinkedIn network map

This is a very short post on my LinkedIn network map.

The identification of three distinct clusters of contacts is interesting and (kind of) makes sense. What is particularly useful is identifying the links between clusters that ‘should’ be stronger. In terms of developing a professional personal learning network as part of a personal learning environment, LinkedIn maps look useful as a visual “sense-making” tool and for identifying your network’s strengths and weaknesses. The next step is to work out why some components of my network look weak, whether these weaker areas can and should be strengthened and, if so, how.
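As a rough sketch of what such a map is doing under the hood, community detection over a contact graph gives the same kind of view. The contacts and ties below are invented for illustration, and this is not LinkedIn’s actual algorithm.

```python
# Rough sketch of the cluster-finding idea behind a LinkedIn-style network
# map: detect communities among contacts and list the ties that bridge them.
# The edge list is invented; in practice it would come from an export of
# your own connections.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [  # (contact, contact) pairs indicating the two know each other
    ("Anna", "Ben"), ("Ben", "Cara"), ("Anna", "Cara"),   # e.g. a higher-ed cluster
    ("Dev", "Ewa"), ("Ewa", "Finn"), ("Dev", "Finn"),     # e.g. an L&D cluster
    ("Cara", "Dev"),                                      # a single bridging tie
]
G = nx.Graph(edges)

clusters = list(greedy_modularity_communities(G))
for i, cluster in enumerate(clusters, start=1):
    print(f"Cluster {i}: {sorted(cluster)}")

# Ties that cross clusters are the links that 'should' perhaps be stronger.
membership = {node: i for i, c in enumerate(clusters) for node in c}
bridges = [(u, v) for u, v in G.edges() if membership[u] != membership[v]]
print("Bridging ties:", bridges)
```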

Google Alert for the Soul

See on Scoop.it: Network learning

Peter Evans’s insight:

An interesting but unconvincing argument is presented here. The claim about the corrupting influence of consumerism on authenticity appears to be based on a ‘straw man’ argument that accepts identity as individual: the corruption is due to the colonisation of self-actualisation by consumerism. Yet, arguably, the idea of an authentic individual (internal) identity has always been problematic.

Secondly, the argument that individual identity is being transformed by social media into a socialised and computationalised (and networked?) identity appears to rely on technological determinism. Social media has not made “authenticity as fidelity to an autonomous, unified a priori self” untenable; it was always untenable, as humans are inherently social animals. Furthermore, the idea that the quantified self is a way of locating an authentic self seems distinctly flawed and would benefit from a more critical analysis of the ‘computational turn’ in the social sciences. Ben Williamson’s notion of the “data doppelganger” seems more appropriate here (http://bit.ly/1lwXlIC).

 

See on thenewinquiry.com

Digital Scholarship day of ideas: data [2]

This is the second session of the day I wanted to note in detail (the first is here). The session is Robert Procter, Professor of Social Informatics at the University of Warwick, on Big Data and the sociological imagination. These notes are written live from the live stream. So here we go:

The title has changed to Big Data and the Co-Production of Social Scientific Knowledge. The talk will explain a bit more about social informatics, as a hybrid of computer science and sociology; the meaning of ‘big data’; and how academic sociology can use such data, including the development of new tools and methods of inquiry – see COSMOS – concluding with remarks on how these elements may combine in an exciting understanding of how social science and technology may emerge through different stakeholders, including crowd-sourced approaches.

Social informatics is the interdisciplinary study of the factors that shape the adoption of ICT and the social shaping of technology. Processes of innovation involving distributed technologies are large in scale and involve a diverse range of publics, such as understanding social media as a process of large-scale social learning: asking how social media works and how people can use it to further their aims. Because it is public, social media makes it easier in many ways to see what is going on, as the technology makes much of the data available (although it’s not entirely straightforward).

Social media is Rob’s primary area of interest. Recent research includes the use of social media in scholarly communications to put research into the public domain, although the value of this is not entirely clear; the research identified positive and negative viewpoints. It also looked at how academic publishers were responding to such changes in scholarly communications, for example by supporting the use of social media and developing tools to trace and aggregate the use of research data. This showed mixed results.

Another research project was on the use of Twitter during the 2011 riots in England, in conjunction with The Guardian. In particular, was social media important in spreading false information during such events? The research looked at particular rumours identified in the corpus of tweets: how do rumours emerge in social media, and how do people behave and respond to such rumours?

This leads to the question of how to analyse 2.5 million tweets, which are qualitative data. The research needs to seek out structures and patterns in order to focus scarce human resources on closer analysis of the tweets.

Savage and Burrows (2007), on the coming crisis of empirical sociology, argue that the best sociology is being done by the commercial sector, as it has access to the data, and that academic sociology is becoming irrelevant. However, newer sources of data provide for an enhanced relevance of academic sociology, and this is reinforced by the rise of open data initiatives. So we can feel more confident about the future of academic sociology.

But how this data is being used raises further issues, such as linking mood in social media with stock market movements, which confuses correlation and causation. Other analysis has focused on challenges to dictatorial regimes, the promotion of democracy and political change, and the capacity of social movements to self-organise. Methodological challenges concern dealing with the volume of data, so combining computational tools with sociological sensitivity and understanding of the world. But many sociologists are wary of this ‘computational turn’.

Returning to the England riots, and the rumour of rioters attacking a children’s hospital in Birmingham. This involves an interpretive piece of work focused on data that may provide useful and interesting results. The rumour started with people reporting police congregating at the hospital, from which people inferred that the hospital was under threat. The computational component was to discover a useful structure in the data using sentiment and topic analysis: tweets were divided into originals and retweets, which combine into information flows, some bigger than others. Taking the size of an information flow as an indicator of significance can suggest where to focus the analysis. Coding frames were used to capture the relevant ways people were responding to the information, including accepting and challenging tweets, and this coding was used to visualise how information flows through Twitter. The rumour was initially framed as a possibility but mushroomed, and different threads of the rumour emerged. The rumour initially spread without challenge, but later people began to tweet alternative explanations for the police being near the hospital, i.e. that a police station is next to the hospital. So rumours do not go unchallenged, and people apply common-sense reasoning to them. While rumours grow quickly in social media, the crowd-sourcing effects of social media help in establishing what the likely truth is. This could be further enhanced through engagement from trusted sources such as news organisations or the police, and could be augmented by computational work to help address such rumour flows (see Pheme).
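[Aside: a toy sketch of the “size of the information flow” step described above: group retweets with the original tweet they stem from and rank the flows by size, so scarce human attention goes to the biggest flows first. The records below are simplified stand-ins for Twitter’s real data structures.]

```python
# Toy sketch of sizing information flows: each original tweet plus its
# retweets forms one flow, and flow size indicates where to focus closer
# human analysis. The records below are simplified stand-ins for real tweets.
from collections import Counter

tweets = [
    {"id": 1, "text": "Police gathering at the children's hospital", "retweet_of": None},
    {"id": 2, "text": "RT: Police gathering at the children's hospital", "retweet_of": 1},
    {"id": 3, "text": "RT: Police gathering at the children's hospital", "retweet_of": 1},
    {"id": 4, "text": "There's a police station next to the hospital", "retweet_of": None},
]

# Count every tweet against the original it descends from (or itself).
flow_sizes = Counter()
for t in tweets:
    root = t["retweet_of"] if t["retweet_of"] is not None else t["id"]
    flow_sizes[root] += 1

originals = {t["id"]: t["text"] for t in tweets if t["retweet_of"] is None}
for root, size in flow_sizes.most_common():
    print(f"flow of {size} tweet(s): {originals.get(root, '<original not in sample>')}")
```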

There is also the question of what the police were doing on Twitter at the time. In Manchester, accounts were created to disseminate what was happening and to draw the police’s attention to events, so acting to inform public services.

This research indicates innovation as co-production: people collectively experimenting with and discovering the limitations and benefits of social media. Uses of social media are emergent and shaped through exploration.

On to the development of tools for sociologists to analyse ‘big’ social data, including COSMOS, which helps interrogate large social media data sets. This also involves linking social media data with other data sets [and so links to open data]. COSMOS assists in forging interdisciplinary working between sociologists and computer scientists, provides interoperable analysis tools and evolves capabilities for analysis. In particular, Rob points to the issue of the black-boxing of computational analysis; COSMOS aims to make the computational processes as transparent as possible.
COSMOS tools include text analysis and social network analysis linked to other data sets. A couple of experimental tools are being developed for geolocation and for topic identification and clustering around related words. COSMOS research is looking at social media and civil society; hate speech and social media; citizen science; crime sensing; suicide clusters and social media; and the BBC and tweeting the Olympics. Rob points to an educational need for people to understand the public nature of social media, especially in relation to hate speech.
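[Aside: as an illustration of the “linked to other data sets” point, a minimal pandas sketch joining aggregated tweet counts to an open data indicator on a shared geography. The figures and column names are invented, not COSMOS’s actual data model.]

```python
# Sketch of linking aggregated social media data to an open data set, in the
# spirit of COSMOS's data-linkage tools. The figures and column names are
# invented for illustration, not the real COSMOS data model.
import pandas as pd

# Tweet volumes already aggregated to counts per local authority (made up).
tweet_counts = pd.DataFrame({
    "local_authority": ["Cardiff", "Birmingham", "Manchester"],
    "tweets": [1520, 3410, 2980],
})

# An open data indicator keyed on the same geography; in practice this would
# be read from a published CSV (e.g. with pd.read_csv). Values are made up.
deprivation = pd.DataFrame({
    "local_authority": ["Birmingham", "Cardiff", "Manchester"],
    "deprivation_score": [34.1, 27.3, 31.6],
})

# Join the two data sets on the shared geography so they can be analysed together.
linked = tweet_counts.merge(deprivation, on="local_authority", how="left")
print(linked)
```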

Social media as a digital agora: on the role of social media in developing civil society and social resilience through sharing information, holding institutions to account, inter-subjective sense-making, cohesion and so forth.

Sociology beyond the academy and the co-production of scientific knowledge: Rob points to examples such as the Channel 4 fact checker as evidence of wider data awareness and understanding, and to citizen journalism mobilising people to document and disseminate what is going on in the world. He also gives the examples of sousveillance of the police as a counter to the rise of the surveillance state, and The Guardian’s use of volunteers to analyse MPs’ expenses. So ‘the crowd’ is involved in social science through collecting and analysing data, sociology spans the academy and the boundaries of the academy are becoming more porous. These developments create an opportunity to realise a ‘public sociology’ (Burawoy 2005), but this requires greater facilitation from the academy through engaging with diverse stakeholders, the provision of tools, new forms of scholarly communication, training and capacity building, and developing more open dialogues on research problems. He points to Public Lab and hackathons as means for people to engage with and do (social) science themselves.

Twitter “ain’t all that”

A useful reminder that Twitter Should Not Be Your Only Communications Channel when organising and promoting events, for the following reasons:

Firstly, not everyone is on Twitter. You wouldn’t have thought this point needed to be made, but apparently it does.

Secondly, not everyone follows the right people on Twitter. This applies doubly if you are organising an event and tweet details from your personal account and not some kind of event account. How vain are you to assume that everyone who matters to your event follows your personal account?

Thirdly, even if they do, it’s very easy to miss a tweet. If you don’t check Twitter regularly it’s easy to miss old tweets, especially as the newest tweets are shown first.

And even if you do see a tweet going past containing a fact you need to remember, it’s too easy for it to slip past without you having recorded it, and the next time you try to look for it, it’s almost impossible to find (e.g. “Where is tonight’s event? I know someone tweeted it last week but now I can’t find it!”).

Now we get on to the two-way communication part. Again, there’s nothing wrong with doing this – the problem comes if you only do this and keep no other communication channels open.

Firstly, it’s 140 characters. You can’t discuss any details, or any points of finesse, or a complex situation. You just can’t. Communication is superficial.

Secondly, almost all communication is public and many people aren’t happy with that. Maybe the nature of their comment means they want to discuss it in private?

And lastly, remember that for large segments of the population, Twitter is not a safe space. Not in the slightest. Really not. If someone does not feel comfortable using Twitter, are you happy excluding them from your communications, remembering that they may already feel excluded from many other things?

Social Network Analysis and Digital Data Analysis

Notes on a presentation by Pablo Paredes. The abstract for the seminar is:

This presentation will be about how to make social network analysis from social media services such as Facebook and Twitter. Although traditional SNA packages are able to analyse data from any source, the volume of data from these new services can make convenient the use of additional technologies. The case in the presentation will be about a study of the degrees of distance on Twitter, considering different steps as making use of streaming API, filtering and computing results.

The presentation is drawn from the paper: Fabrega, J. & Paredes, P. (2013) Social Contagion and Cascade Behaviours on Twitter. Information 4(2): 171–181.

These are my brief and partial notes on the seminar taken live (so “typos ahead!”).

The seminar looks at gathering data from social network sites and at a research project on contagion in digital networks.

Data access requires knowledge of the APIs for each platform, although Apigee documents the APIs of most social networks (though, as an intermediary, this may lead to further issues in interfacing different software tools; Python toolkits may assist in accessing APIs directly rather than through Apigee). In their research, Twitter data was extracted using Python tools such as Tweepy (for calls to Twitter) and NetworkX (a Python library for SNA), along with additional libraries including Apigee. These tools allow the investigation of forms of SNA beyond ego-centric analysis.
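[Aside: a minimal sketch of the Tweepy-plus-NetworkX pattern described here: stream tweets matching a rule and build a mention graph for SNA. The Twitter API has changed repeatedly since this seminar, so this uses Tweepy’s newer StreamingClient interface with a placeholder bearer token and hashtag; treat it as an illustration of the pattern rather than the authors’ original code.]

```python
# Sketch of streaming tweets into a NetworkX graph for social network
# analysis. Bearer token and filter rule are placeholders; author ids and
# mentioned usernames are mixed as node labels purely for brevity.
import time

import networkx as nx
import tweepy

BEARER_TOKEN = "YOUR-BEARER-TOKEN"  # placeholder credential
G = nx.DiGraph()

class MentionGraphBuilder(tweepy.StreamingClient):
    """Adds an edge from a tweet's author to each account it mentions."""
    def on_tweet(self, tweet):
        for mention in (tweet.entities or {}).get("mentions", []):
            G.add_edge(tweet.author_id, mention["username"])

stream = MentionGraphBuilder(BEARER_TOKEN)
stream.add_rules(tweepy.StreamRule("#edindice"))  # hypothetical filter rule
stream.filter(tweet_fields=["author_id", "entities"], threaded=True)
time.sleep(60)        # collect for a minute
stream.disconnect()

# Standard SNA measures are then available on the collected graph.
print(nx.degree_centrality(G))
```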

Pablo presented a network diagram of Twitter ego-networks built with NodeXL, but direct access to the Twitter API would give more options for alternative forms of network analysis; analysing the diffusion of information on Twitter was not possible in NodeXL.

The study used the three degrees of influence theory from Christakis & Fowler (2008): social influence diffuses to three degrees but not beyond, due to noisy communication and technology/time issues leading to information decay. For example, most retweets take place within 48 hours, so diffusion tends not to extend beyond a friend’s friend’s friend. This relates to network instability and loss of interest from users beyond three degrees, alongside information competition that becomes too intense beyond three degrees, so diffusion decomposes.

The research found a 3–5% retweet rate in the diffusion of a single tweet. Retweet rates were higher with the use of a hashtag and correlated with the number of followers of the originator, but correlated negatively with @-mentions in the original tweet, possibly because @-mentions are seen as private conversations. Overall, less than 1% of retweets went beyond three degrees.
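[Aside: a toy sketch of how the “degrees” in such findings can be measured: treat a retweet cascade as a graph rooted at the originating account and compute each retweeter’s distance from that root. The cascade below is invented.]

```python
# Sketch of measuring cascade depth for the three-degrees finding: build a
# directed graph of who-retweeted-from-whom and measure each node's distance
# from the originating account. The edge list here is invented.
import networkx as nx

# (retweeter, source_of_the_retweet) pairs for one cascade; "origin" posted
# the original tweet.
edges = [
    ("alice", "origin"), ("bob", "origin"),
    ("carol", "alice"), ("dave", "carol"), ("erin", "dave"),
]

cascade = nx.DiGraph(edges)
# Distance from every account back to the originator along retweet edges.
depths = nx.shortest_path_length(cascade, target="origin")

beyond_three = [user for user, d in depths.items() if d > 3]
print(depths)  # e.g. {'origin': 0, 'alice': 1, ..., 'erin': 4}
print(f"{len(beyond_three)} of {len(depths) - 1} retweeters beyond three degrees")
```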

The conclusion is that diffusion in digital networks is similar to that found in physical networks, which implies that there are human barriers to communication in online spaces. But the research is limited due to restrictions on access to the Twitter API as well as its privacy policies. Replicability becomes very difficult as a result, and this issue is compounded as API versions change, so software libraries and tools no longer work, or no longer work in the same way. It is worth noting that there is no way of knowing how Twitter samples the 1% of tweets provided through the API; there is therefore a need to access 100% of the Twitter data to provide a clear baseline for understanding Twitter’s samples and to justify the network boundaries.

Pablo also pointed to the importance of writing your own code, with R or Python preferable as they are easier to learn and have larger support communities.

LinkPool [20140318]

The Open Education Trends Report provides a useful set of insights on current trends in MOOCs in particular and open education in general. This is clearly a rapidly emerging area, so the report provides a useful summary of ‘where things are at’, but prediction is necessarily less clear. It is a useful report with lots to consider, including:

1. MOOCs and professional learning, with a focus on the views of learners and employers, is interesting and points to the need for further research in this area

2. the growth of apps, with the open CME (continuing medical education) app looking particularly good. As the report states:

However, the ideal ‘my (open) education app’ will require far more extensive functionality. An ideal app should offer access to both open and closed education, be independent of any particular MOOC platform or brand, allow the user to search for open educational resources and OpenCourseWare, issue alerts when a suitable product has been found, update the user’s portfolio, feature social education network tools, accommodate both formal and informal learning and administer examinations on the basis of identity verification. Although this may seem like an extensive wish list, such solutions are not far off.

3. a clear summary of the challenges of integrating open education and the qualification based education value network

4. a useful list of MOOC research sites and portals for those interested in an evidence-based approach

5. the brief discussion of learning analytics.

Heutagogy, self-directed learning and complex work is a useful commentary on learning and working in contexts of complexity (which relates to my earlier post on informal learning and ‘wicked problems’). I’m not familiar with heutagogy, but the quote given in the post suggests heutagogy is well adapted to working in emergent contexts of complexity (or wickedness!):

“Heutagogy applies a holistic approach to developing learner capabilities, with learning as an active and proactive process, and learners serving as “the major agent in their own learning, which occurs as a result of personal experiences” (Hase & Kenyon, 2007, p. 112).

As the post suggests, heutagogy is closely aligned with capability as:

“Capable people are those who: know how to learn; are creative; have a high degree of self-efficacy; can apply competencies in novel as well as familiar situations; and can work well with others. In comparison with competencies which consist of knowledge and skills, capability is a holistic attribute.”
This points back to the importance of ‘learning to learn’ as perhaps the only truly transferable competence. For me, this is a key challenge for education and the discourses on employability.

Working and learning in networks

I’m currently pulling together various thoughts on issues surrounding organisational design, networks and workplace or occupational learning. Initially, I’m drawing on:

the notion of learning networks, defined by Sloep (2008) as: “online, social network that is designed to support non-formal learning in a particular domain” to frame a discussion of the use of social technologies for workplace learning and the management of knowledge. In particular, the affordances of social technologies in enabling learning outcomes traditionally seen as vicarious by-products of work activities to be captured and made explicit as micro-learning objects (Peschl 2006; Schmidt 2005), will be explored in the context of professional learning that focuses on responding to complex and ‘wicked’ problems (Margaryan et al, 2013).

From this, I’m looking to explore


… how technology-enabled learning networks act as mechanisms for personal professional competence development. How might, or how do, professionals combine and use self-selected digital tools to support the integration of work and learning as Personal Learning Environments (PLEs) (Pata 2009; Rajagopal et al. 2012) and as approaches to Personal Knowledge Management (PKM) (Redecker 2009)?

So I *think* the argument I’m developing is that, increasingly, for *some* occupations workplace learning is in practice operationalised as a ‘web of relations’ (Fenwick 2008) within and across organisational and professional boundaries, and so the long-standing practices of L&D functions are increasingly redundant in this context. By extension, I’d suggest that various implications arise from this for much higher education provision: for example, is the privileging of knowledge content really justified, can the assumption that students are effective learners in such a context be justified, and where or what may indicate knowledgeable authority in such a context?

LinkPool [16012014]

I’ve been back to work for four days now but today was the first day of feeling inspired and quite happy to be back (possibly due to ‘home improvement’ hassles earlier in the week). Anyway, this is not an extensive post but I found a couple of useful reads this week:

An e-learning strategy framework caught my eye mainly for the statement that:

I realized that this manager was under the impression that her learning management system (LMS) was her e-learning strategy. Several years ago, Brandon Hall said that an “LMS is the lynch-pin of an e-learning strategy,” but technology alone is not a strategy.

Which is a nice illustration of the common problem of technological determinism. But the framework presented discusses organisational goals, MarComms, administration, audiences and finance, yet says nothing about pedagogy. Can an e-learning strategy framework that doesn’t address questions of how users learn be adequate?

The Vulnerability of Learning from @gsiemens via @mhawksey caught my eye as something rarely stated but very true:

Learning is vulnerability. When we learn, we make ourselves vulnerable. When we engage in learning, we communicate that we want to grow, to become better, to improve ourselves.

And the same can be said of other valuable learning processes such as creativity and innovation – there is a link between making oneself vulnerable and doing what is valuable. As George suggests, the logic of efficiency may well end up destroying what makes learning valuable, personally and socially.