Tag Archives: PhD

PhD Abstract: Twitter chat events & the making of a professional domain

Here is the latest draft of a one page abstract of my PhD:

Distributed online discussion events in social media are increasingly used as sites for open, informal professional development, knowledge sharing and community formation. Synchronous chat events hosted on Twitter have become particularly prominent in a number of professional domains. Yet theoretical and critical analysis of these Twitter chat events has, to date, been limited: this thesis contributes to the development of such analysis through a socio-material, network assemblage lens employing trans-disciplinary and multi-method research approaches. This research positions the Twitter chat events as the relational effects of network-assemblages of human and non-human actants.

This thesis explores Twitter chat events with a particular focus on human resource development (HRD) as a professional domain that is widely seen as inherently changeable, fluid, contested and continually emergent. This study examines how practitioner-generated reportage of professional practice and the specific functions of Twitter intra-act to generate a particular definition of HRD as a professional field of practice.

A combination of descriptive statistics, Social Network Analysis and analysis of the content and structure of the chat events has been employed in researching 32 separate chat events comprising 12,061 tweets. The research methods generated multiple readings of the research data and surfaced different and fluid potential lines of enquiry into the Twitter chat events. A number of these potential lines of enquiry were then selected as points of entry to ‘zoom in’ to the data using a critical discourse analysis of a smaller sample of the chat events.
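To give a flavour of the kind of network construction this combination of methods involves, here is a minimal sketch of building a directed mention network from chat-event tweets and deriving simple degree statistics. The data format, usernames and hashtag are invented for illustration; the thesis's actual analysis pipeline is not described at this level of detail.

```python
import re
from collections import Counter

MENTION = re.compile(r"@(\w+)")

def mention_edges(tweets):
    """Build directed 'mention' edges (sender -> mentioned user) from tweets.

    Each tweet is a dict with 'user' and 'text' keys -- a hypothetical,
    minimal stand-in for an archived chat-event dataset."""
    edges = Counter()
    for t in tweets:
        for target in MENTION.findall(t["text"]):
            if target != t["user"]:          # ignore self-mentions
                edges[(t["user"], target)] += 1
    return edges

def degree_counts(edges):
    """Simple descriptive statistics: weighted out-degree and in-degree per user."""
    out_deg, in_deg = Counter(), Counter()
    for (src, dst), weight in edges.items():
        out_deg[src] += weight
        in_deg[dst] += weight
    return out_deg, in_deg

# Tiny illustrative sample (invented usernames, not real chat data)
tweets = [
    {"user": "alice", "text": "@bob great point on informal learning #hrdchat"},
    {"user": "bob",   "text": "@alice thanks! @carol what do you think?"},
    {"user": "carol", "text": "Agreed with @alice and @bob"},
]
edges = mention_edges(tweets)
out_deg, in_deg = degree_counts(edges)
print(in_deg.most_common(1))  # most-mentioned participant
```

In practice an SNA package would be used on top of an edge list like this to compute centrality measures and visualise the event; the sketch only shows the step from raw tweet text to a weighted directed network.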

A key finding of the research is that the Twitter chat events seek to generate an idealised archetype of HRD bounded by a stable set of dominant practices. This idealised archetype is positioned in contrast to a repertoire of common HRD practices presented as illegitimate in this professional grouping. A second key finding relates to the chat event assemblages as collective achievements involving human and non-human actants. The collective effects surfaced in the research problematise (a) the notion of online communities as the product of network ties and (b) the individualist orientations of much of the literature on professional learning.

It is further argued here that the entanglement of the particular technologies and functions of Twitter with the discursive structures and strategies mobilised in the chat events creates tensions between the discursive territorialisation and stabilisation of particular discourses of professional identity and meaning-making, and the deterritorialisation, fragmentation and fluidity inscribed in Twitter itself.

IT Futures at Edinburgh

I’m attending the IT Futures conference at Edinburgh today. These notes are not intended to be a comprehensive record of the conference but to highlight points of interest to me and so will be subjective and partial.

A full recording of the conference will be available at the IT Futures website.

The conference opens with an address from the Principal, Sir Timothy O’Shea with an opening perspective:

Points to the strengths of the University in computing research, super-computing and so on, and ‘ludicrously’ strong in e-learning with 60 plus online postgraduate programmes. In these areas, our main competitors are in the US rather than the UK.

Beginning with a history of computing from the 1940s onwards. Points to Smallwood on using computers for self-improving teaching and Papert on computing/e-learning for self-expression. In the 1980s/90s digital education was dominated by the OU. In the 1990s the rise of online collaborative learning was an unexpected development that addressed the criticism that e-learning (computer assisted learning) lacked interactive/personalisation elements.

The 2000s saw the rise of OERs, and of MOOCs as a form of providing learning structure around OERs. Also noted Uruguay as one of the few countries to implement OLPC effectively.

Argues that the expansion of digital education has been pushed by technological change rather than pedagogical innovation: we still refer to the constructivism of Vygotsky while technological innovation has been massive.

How big is a MOOC?
– 100 MOOCs is about the equivalent in study hours of a BA Hons, and a MOOC is made up of 1,000 ‘minnows’ (I think this means small units of learning). MOOCs are good for access, as tasters and to test e-learning propositions. They also contribute to the development of other learning initiatives and enhance institutional reputations, including relevance through ‘real-time MOOCs’ such as on the Scottish referendum. MOOCs provide a resource for learning analytics.

So e-learning is mature, not new, and blended learning is ‘the new normal’ and dominated by the leading university brands of MIT, Stanford, etc. A huge contribution of e-learning is access.

A research agenda: to include modelling individual learning, including predictive learning support; speed of feedback; effective visualisation; supporting collaboration; understanding natural language; the location of the hybrid boundary (e.g., in practical tests); personal programming (coding) and how realistic it is for non-geeks to develop meaningful coding skills.

Open questions are around data integrity and ownership; issues of digital curation; integration of data sources; who owns the analysis; should all researchers be programmers?; and how to implement the concept of the learner as researcher?


Question about artificial intelligence. Answer: Tim O’Shea’s initial research interest was in developing programs that would teach intelligently – self-improving teachers – but using AI was too difficult, and he switched towards MIT’s focus on self-expression and on programmers understanding what their code was doing. He still thinks the AI route is too difficult to apply to educational systems.

Q: surprised by an absence of gaming for learning?

A: clearly they can, and cites Stanford work on the influence of games on learning motivation.

Q: on academic credit and MOOCs

A: Thinks this is inevitable and points to Arizona State University, which is attempting to develop a full degree through MOOCs. Can see the inclusion of MOOCs in particular postgraduate programmes – a heuristic of about a third of a Masters delivered via (external) MOOCs – but this is more likely to be taken forward by the more vocational universities in the UK, using MIT or Stanford MOOCs to replace staff!

Now moving on to Susan Halford on ‘Knowing Social Worlds in the Digital Revolution’:

Researches organisational change, work and digital innovation. Has not directly researched changes in academic work but has experienced them through digital innovation. Digital innovation has kick-started a revolution in research through the volume of data available and the capacity to track, analyse and visualise all sorts of data. So data is no longer simply used to research something; it has become the object of social research.

Digital traces may tell us lots about how people live, live together, their politics, attitudes, etc. – data capturing social activities in real time and over time, rather than relying on the reporting of activities in interviews, surveys and so on. At least, that is the promise, and there are a set of challenges to be addressed to realise the potential of these data (see also this paper from Prof Halford).

Three key challenges: definition, method and interdisciplinarity.

Definition – what are these digital data? They are not naturally occurring and do not provide a telescope onto social reality: digital data are generated through mediation by technology. In the case of Twitter, there is a huge amount of data, but it is mediated by a technological infrastructure that packages the data. The world is, therefore, presented according to the categories of the software – interesting, but not naturally occurring, data. Also, social media generate particular behaviours and are not simply mirrors of independent social behaviour – gives the example of the ReTweet.

Also, there is the issue of the provenance and ownership of data. Survey data is often transparent in the methods used to generate it and, therefore, in the limits of the claims that can be made from it. But social media data is not transparent in how it is generated – the data is privately owned, and data categories and data stream construction are not transparent. We know that there is a difference between official and unofficial data. We do not know what Twitter is doing with its data, only that it is part of an emerging data economy. So this data is not neutral; it is the product of a series of technological and social decisions that shape it. We need to understand the socio-technical infrastructure that created it.

Method – the idea that in big data the numbers speak for themselves is wrong: numbers are interpreted. The methods we have are not well suited to the analysis of large data sets. Research tends towards small-scale content analysis or large-scale social network analysis, but neither is particularly effective at understanding the emergence of the social over time – at harnessing the dynamic nature of the data. A lot of big data research on Twitter is limited to mathematical structures and data mining (and is a-theoretical) and is weak on the social aspects of social media data.

Built a tool at Southampton to dynamically map data flows through ReTweeting.

Interdisciplinarity: it is a challenge to operationalise inter-disciplinarity.

Disciplines imagine their object of study in (very) different ways and with different forms of cultural capital (what is the knowledge that counts – ontological and epistemological differences). So the development of interdisciplinarity involves changes on both sides – researchers need to understand programming and computer scientists need to understand social theory. But also need to recognise that some areas cannot be reconciled.

Interdisciplinarity leads to questions of power-relations in academia that need to be addressed and challenged for inter-disciplinarity to work.

But this work is exciting and promising as a field in formation. It also raises responsibilities: the ethical responsibilities involved in representing social groups and societies through data analytics; recognising that digital data excludes those who are not digitally connected; and recognising that data alone is inadequate, as social change involves politics and power.

Now Sian Bayne is responding to Prof Halford’s talk: welcomes the socio-technical perspective taken and points to a recent paper, “The moral character of cryptographic work”, as generating interest across technical and social scientists.

Welcomes the emphasis of interdisciplinarity while recognising the dangers of disciplinary imperialism.


What actions can be taken to support interdisciplinarity?

A: shared resources and shared commitments are important. Academic structures also matter – refers to the REF structures working against people submitting across multiple subjects (though it is pointed out that joint submissions are possible).

Time for a break ….


We’re back with Bernard Schafer of the School of Law talking on the legal issues of automated databases. This is partly drawn from a PG course on the legal issues of robotics.

The main cultural reference for the regulation of robots is Terminator, but this is less worrying than Short Circuit: e.g., when the robot reads a book, does it create a copy of it? Does the licence allow the mining of the book’s data? See the Qentis hoax. The UK is the only country to recognise copyright ownership of automatically generated works/outputs, but this can be problematic for research – can we use this data for research?

If information wants to be free, do current copyright and legal frameworks support and enable research, teaching, innovation, etc.? Similar issues arose from the industrial revolution.

Robotics is replacing labour – initially manual labour, but there are now examples of the use of robots in teaching at all levels.

But can we automate the dull parts of academic jobs? This creates some interesting legal questions. For example, in Germany giving a mark is an administrative act, similar to a police caution, and is subject to judicial review – can a robot undertake an administrative act in this way?

Lots of interesting examples of automated education and teaching digital services were shown.

A good question for copyright law is what ‘creativity’ means in a world shared with automatons. For example, when does a computer shift from thinking an idea to expressing it – the distinction fundamental to copyright law?

Final key question is: “Is our legal system ready for automated generation and re-use of research?”

Now it’s Peter Murray-Rust on academic publishing, demonstrating text/content mining of chemistry texts.

…And that’s me for the day as I’m being dragged off to other commitments.

The Twitter Experience

For all the structuring effects of the Twitter functional features, the Twitter experience is generally perceived as a private one, as only the individual user can see their Twitter feed, as they have structured it, on their particular screen configuration (Gillen and Merchant 2013). This aspect of the individualisation and heterogeneity of public and open textual communication adds to the complexities of interpreting, analysing and making sense of Twitter. Gillen and Merchant’s (2013) discussion of the capacity of Twitter users to organise the flow of discourses they are presented with seems to ignore both the algorithmic impositions of, for example, Trending terms in that interface and the effects of the content of individual Tweets being perceived as a coherent informational flow or a chaotic mess of impressions (or both). The Twitter user experience is not an isolated or individualised one but is, rather, an entanglement of heterogeneous intentions, business logics, coded protocols, algorithmic outputs, collective norms and individual perceptions.

It is this entanglement between the human and material that opens, closes and patterns or orders the particular uses of Twitter. Twitter is constantly and actively made and remade in the intra-actions of user behaviours, hardware, coding, algorithms and visual design, rather than Twitter being a neutral utility or passive instrument.

Weeknotes 26062015

This has been a week of knuckling down at getting stuff done – but I also squeezed in one day off as the last day available before the schools break for summer (schools in Scotland start the holidays at the end of June but return mid-August which still feels so wrong to me). What I did this week:

attended a briefing session on the University’s process for academic promotion (lots of paperwork and pretty tough criteria)
had numerous dissertation supervision sessions on Skype
progressed my PhD: writing an overview of research into Twitter and the dominant approaches based on quantitative methods of statistical analysis and Social Network Analysis, or on conversation analysis. The comparative lack of qualitative research is a notable omission, especially in considering the affective dimensions of online ‘communities’

A couple of links of interest from this week:
No, Sesame Street Was Not The First MOOC from Audrey Watters is a great post on open education, the history of MOOCs, the insurgency of venture capital in EdTech and the importance of theory and research in education (and some good pics of Bert & Ernie).

Mark Carrigan’s post on using a blog as a research journal is a useful overview of the purpose of a research journal as well as the benefits of working out loud.

ReCon, Research in the 21st Century: Data, Analytics and Impact

So here we are at ReCon, Research in the 21st Century: Data, Analytics and Impact at the University of Edinburgh’s Business School. I’ll be taking notes here throughout the day but these will be partial and picking up main points of interest to me.

The conference is opening with Jo Young from the Scientific Editing Co giving the welcome and introduction to the event.

The first session is from Scott Edmunds from GigaScience on “Beyond Paper”. Have the 350-year-old practices of academic publishing had their day? Is publishing now the advertising of scholarship, formulated around academic clickbait? Taken to extremes, we can see the use of bribery around impact factors, writing papers to order, guaranteed publications, etc. This has led to an increase in retractions (fifteen-fold in the last decade), so that by 2045 as many papers will be retracted as published – and then we’re into negative publishing.
We need to think of new systems of incentives, and we now have the infrastructure to do this, especially data publishing of the kind GigaScience provides.
GigaScience has its own data publishing repository as well as an open access journal with an open and transparent review process. Open data and data publishing are not new: it was how Darwin worked, depositing collections in museums and publishing descriptions of finds before the analysis that led to On the Origin of Species.
Open data has a moral imperative regarding data on natural disasters, disease outbreaks and so forth. Releasing data leads to sharing of data and analysis of that data, as for example in the E. coli genome analysis. Traditional academic outputs were created, but it is also used as an example of the impact of open data. See the Royal Society report here. The crowd-sourced approach to genome sequencing is being used in, e.g., Ebola and in rice genomes addressing the global food crisis. But publishing of analysis remains slow and needs to be closer to real-time publishing.
So we’re now interested in executable data, looking at the research cycle of interacting data and analysis leading to publications, with micro- and nano-publications that retain DOIs. A lot of this is collected on GitHub.
Also looking at the sharing of workflows using the Galaxy system, again giving DOIs to particular workflows (see GigaGalaxy), and at sharing virtual machines (via Amazon).
Through analysis of published papers they found high rates of errors, but also that replication was very costly.
So the call is “death to the publication, long live the research object”, to reward replication rather than scholarly advertising.

Question: how is the quality of the data assured?
Journal publications are peer reviewed, with checks by the journal’s own data scientists, while open data is not checked. Tools are available and being developed that will help improve this.

Now on to Arfon Smith from GitHub on predicting the future of publishing. Looking at open source software communities for ideas that could inform academic publishing. GitHub is a solution to the issues of version control for collaboration, using Git technology. People use GitHub for different things: from single files through to massive software projects involving 7m+ lines of code. There are about 24m projects on GitHub and it is often used by academics.
Will be talking about the publication of software and data rather than papers. The assumptions for the talk are: 1. open is the new normal; 2. the PDF is an increasingly unsatisfactory way of sharing research; and 3. we are unprepared to share data and software in useful ways.
GitHub is especially being used in the data-intensive sciences. There is the argument that we are moving into a new paradigm of science, beyond computational science into data-intensive science (data abundance) and Big Science.
Big Science requires new tools, ways of working and ways of publishing research. But as we become more data intensive, reproducibility declines under traditional publishing. In the biosciences, many methods are black boxed and so it is difficult to really understand the findings – which is not good!
To help academics, GitHub has a guide on how to cite code by giving a GitHub repository a DOI (via Zenodo).
The open source practices that are most applicable are:
1. rapid verification, e.g. through verification of pull requests, where the community and third-party providers undertake testing, or through metrics that check the quality of the code, e.g. Code Climate. So verification can and should be automated, and open source is “reproducible by necessity”. So in academia we can see the rise of benchmarking services – see, for example, Recast or the benchmarking of algorithm performance.
2. innovation where there are data challenges, by drawing on a culture of reuse around data products to filter out noise in research and enable focus on the specific phenomena of interest (by eliminating data from other analyses)
3. normal citations are not sufficient for software, and academic environments do not reward tool builders. So there is an idea of distributing credit across authors, tools, data and previous papers, making the credit tree transparent and comprehensive.
These innovations depend on the forming of communities around challenges and/ or where open data is available.
The open software community have developed a number of solutions for the challenges faced in academic publishing.

Now we’ve moved on to Stephanie Dawson, CEO, ScienceOpen on “The Big Picture: Open Access content aggregators as drivers of impact” – which is framed in terms of information overload, a growth trend that is not going to go away. This is reinforced by the economic advantage for open access of publishing more, along with increased interest in open data, micro-publications, etc. At the same time, the science information market is extending to new countries such as India, Brazil and China.
Discovery is largely through search engines, indexing services (Scopus, Web of Science), personal and online networking (conferences, Mendeley) and so on. But these do not rank knowledge by providing reputation, orientation, context and inspiration.
Current tools: the journal impact factor is a blunt tool that doesn’t work at the individual paper level, but it is still perceived as important by academics – and by publishers, as pricing correlates with impact factor. Article-based tools such as usage and dissemination metrics are common.
There is an opportunity for open access to make access to published papers easier, which may undermine publishing paywalls and encourage academics to look to open access channels. But open access publications are about 10% of the total and on a lower growth trajectory, so further incentives are needed for academics to support open access publication.
ScienceOpen is an open access communication platform with 1.5m open access articles plus social networking and collaboration tools. The platform allows commenting on, disseminating, reviewing or ‘liking’ an article. It will develop an approach to enable the ranking of individual articles that can be bundled with others, e.g. by platform users or by publishers [so there is a shift towards alternative and personalised forms of article aggregation that can be shared as collections?].

Question: impact factors can be gamed, as can alternative metrics. What is key is the quality of the data used and of the analysis – metrics for how believable articles are?

We’re looking at how to note the reproducibility of article findings, but this isn’t always possible, so edited collections are a way forward.

Q: this issue of trust is not about people but should be about the data and analysis and the transparency of these – how the data came about?

So there is a need to rethink how methods sections are written. We’re also enhancing the transparency of the review process.

The final session on this section is Peter Burnhill, Director, EDINA on “Where data and journal content collide: what does it mean to ‘publish your data’?”. Looking at two case studies:
1. a project on reference rot (link rot + content drift) to develop ways of archiving the web and capturing how sites/URLs have changed over time. The project tracked the growth of web citations in academic articles and found that 20%+ of URLs are ‘rotten’, with the original pages cited having disappeared, including from open archives. A remedy is to use reference management software to snapshot and archive web pages at the time of citation. The project has developed a Zotero plug-in to do this (see video here).
2. an ongoing project on URL preservation by publishers. There are many smaller publishers that are ‘at risk’ of being lost. Considers data as working capital (which can be private as work-in-progress) or as something to be shared.
The idea of open data is not new to science and can be seen in comments on science from the 19th Century.
The web and archiving problematises the issues of fixity and malleability of data.
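The link-rot check described in the first case study can be sketched in a few lines: classify each cited URL by what a crawl finds. The status-code and redirect heuristics below are my own illustrative guesses, not the project's actual methodology, and the URLs are invented.

```python
from urllib.parse import urlparse

def classify_reference(status, final_url, cited_url):
    """Heuristic classification of a cited URL, in the spirit of the
    reference-rot work: 'link rot' when the resource is gone,
    'content drift' when it resolves but lands somewhere other than
    the cited site. (Illustrative thresholds only.)"""
    if status is None or status >= 400:
        return "link rot"                      # dead or unreachable
    if urlparse(final_url).netloc != urlparse(cited_url).netloc:
        return "content drift"                 # redirected off the cited site
    return "ok"

# Stubbed crawl results: (HTTP status, URL after following redirects).
# A real checker would fetch these over the network.
checks = {
    "http://example.org/paper":  (200, "http://example.org/paper"),
    "http://example.org/old":    (404, "http://example.org/old"),
    "http://journal.example/p1": (200, "http://archive.example/p1"),
}
report = {url: classify_reference(s, final, url)
          for url, (s, final) in checks.items()}
print(report)
```

A snapshot-at-citation workflow (as with the Zotero plug-in) sidesteps both failure modes by archiving the page content at the moment it is cited.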

We’re back following a brief coffee break.

Next up is Steve Wheeler on “The Future is Open: Education in the digital age”. He will be talking about ‘openness’ and what we do with the content and knowledge that we produce and have available. Publishing is about educating our community and so should be as open and freely accessible as possible, to better educate that community.
Pedagogy comes first and technology provides the tools: we don’t want technological determinism. You have to have a purpose in subscribing to a tool – technology is not a silver bullet.
“Meet Student 2.0”: has been using digital tools from six months old onwards. Most of our students are younger than Google! and are immersed in the digital. But I don’t follow the digital natives idea, though I do see merit in the digital residents and visitors concept from White and Le Cornu.
Teachers fear technology for three reasons: 1. how to make it work; 2. how to avoid looking like an idiot; 3. the students will know more than them. For learners the concerns are about access to WiFi and power. Uses the example of the floppy disk, recognised as the save icon but not as a storage device.
Students in lectures use laptops as ‘windows on the world’ to check and expand on what is being presented to them. But what do these windows do? Find information, engage in conversations. Another example is asking about a text on Twitter and receiving a response directly from the author of that text. UNESCO talks about communities of users (2002).
Openness is based on the premise of sharing and becomes more prominent as technology makes sharing possible at scale. Mentions Martin Weller’s Battle for Open and how openness as an idea has ‘won’, though implementation still has a long way to go.
Community is key, based on common interest rather than proximity – as communities of practice and of interest. Being online, en masse, reduces the scope for anonymity and drives towards open scholarship, where the academic opens themselves up to constructive criticism. Everything can be collaborative if we want it to be.
Celebration, connection, collaboration and communication all go into User Generated Content (UGC). He defines UGC as having *not* been through peer review, but there is peer review through blog comments, Wikipedia and Twitter conversations. Notes Wikipedia as the largest human rhizomatic structure in the world.
Moving on to CopyLeft and the Creative Commons. Rheingold on networking as a key literacy of the 21st Century in terms of amplifying your content and knowledge.
Communities of learning and professional learning networks – with a nod to six degrees of separation, though he thinks it is down to two or three degrees as we can network with people much more easily. Collaborative open networks where information is counted as knowledge if it is useful to the community. David Cormier (2007) on rhizomatic knowledge, which has no core or centre: the connections become more important than the knowledge, and knowledge comes out of the processes of working together. This can be contrasted with the closed nature of the LMS/VLE; students will shift as much as possible to their personal learning environments.
He has to mention MOOCs: the original cMOOCs were very much about opening content on a massive scale, led by students. The xMOOC has closed and boxed the concept, generating accusations of a shallow learning experience.
Open access publishing. Gives the example of two papers of his. One was in an open access journal that underwent open peer review: the original paper, the reviewer comments, the response and the final paper were all published – open publishing at its best! The other paper went to a closed journal and took three years to publish, where the open journal took five months. The closed journal paper has 27 citations against 1,023 for the open journal paper.
Open publishing amplifies your content, e.g. through the interactions generated by sharing content on SlideShare. His blog has about 100k readers a month, is another form of publication, and is all available under Creative Commons.
This is about adaptation to make our research and knowledge more available and more impactful.

Question: how are universities responding to openness?
It depends on the university’s business model – cites the freemium model, with a basic ‘product’ being available for free. FutureLearn, for example, gives away partner content for free, with either paid-for certification or enhanced recruitment to mainstream courses as the return.

Now time for lunch

Now back and looking at measuring impact with Euan Adie from Altmetric.
The idea of impact in research is about making a difference. Impact includes:
– quality: rigour, significance, originality, replicability
– attention: the right people see it
– impact: it makes a difference in terms of social, economic and cultural benefits.

REF impact is assessed on quality and impact. A ‘high impact journal’ assumes the journal is of quality and the right people see it (attention).

Impact is increasingly important in research funding across the world. And it is important to look at impact.

Traditional citations counts measure attention – scholars reading scholarship.

Altmetrics manifesto – it acknowledges that research is available and used online, so we can capture some measures of attention and impact (not quality). This tends to look at non-academic attention through blog posts and comments, tweets and newspapers, and at impact on policy-makers. But what this gives is data: a human has to interpret it and put it into context via narrative.
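As a rough illustration of the “this gives you data, not judgement” point, an altmetrics-style attention figure is essentially weighted counting of mentions by source type. The weights and sources below are invented for illustration only; Altmetric’s actual scoring scheme is its own and not reproduced here.

```python
from collections import Counter

# Hypothetical per-source weights -- purely illustrative.
WEIGHTS = {"news": 8, "blog": 5, "policy": 3, "tweet": 1}

def attention_score(mentions):
    """Aggregate online mentions of one research output into a single
    attention figure: counts per source type, weighted by source."""
    by_source = Counter(m["source"] for m in mentions)
    score = sum(WEIGHTS.get(src, 1) * n for src, n in by_source.items())
    return score, by_source

# Invented mentions for one hypothetical paper
mentions = [
    {"source": "tweet"}, {"source": "tweet"}, {"source": "blog"},
    {"source": "news"}, {"source": "policy"},
]
score, by_source = attention_score(mentions)
print(score, dict(by_source))  # 2*1 + 1*5 + 1*8 + 1*3 = 18
```

The number on its own says nothing about why the attention happened; that interpretive, narrative step is exactly what the speaker says a human still has to supply.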

Anna Clements on the university library at the University of St Andrews. What are the policy drivers for the focus on data? Research assessments, open access requirements (HEFCE, RCUK) and research data management policies (EPSRC, 2015). These require HE to focus on the quality of research data with a view to REF2020, asset exploitation, promotion and reputation, and managing research income – as well as student demand/expectations, especially following the increase in fees. So libraries are taking the lead in institutional data science, within a context of financial constraints and ROI, and working with academics.
Developing metrics jointly with other HEIs as Snowball Metrics, involving the UK, US and ANZ as well as publishers; the metrics are open and free to use.

Kaveh Bazargan from River Valley Technologies on “Letting go of 350 years’ legacy – painful but necessary”. The company specialises in typesetting maths-heavy texts but has more recently developed publishing platforms.

weeknotes [20102014]

Over the last few weeks, I’ve been

further working through my research involving discourse analysis along with network and other sociomaterial methods for my PhD. I think I’m developing a stronger understanding of the method “in action” and of Technology Enhanced Learning.

I’m also continuing to enjoy the teaching on two courses: Digital Environments for Learning; and Course Design for Digital Environments.

I’m also continuing to contribute to the development of two initiatives which I’ll hopefully write about sometime soon.

What is wrong with ‘Technology Enhanced Learning’

Last Friday I attended a Digital Cultures & Education research group presentation by Sian Bayne on her recent article What’s the matter with ‘Technology Enhanced Learning’?

These are my notes taken during the presentation and then tidied up later – so they may well be limited, partial and mistaken!


16th century French cypher machine in the shape of a book with arms of Henri II. Image from Uploadalt

While Technology Enhanced Learning (TEL) is a widely used term in the UK and Europe, the presentation positions TEL as an essentially conservative term that discursively limits what we do as practitioners and researchers in the field of digital education and learning. Sian’s critique draws on three theoretical perspectives:

* Science & Technology Studies (STS) for a critique of ‘Technology’
* Critical posthumanism for a critique of ‘Enhancement’
* Gert Biesta’s language of learning for a critique of ‘Learning’

For Technology, we don’t tend to define it but rather black-box it as unproblematically in service to teaching practices. This black-boxing of technology as supporting learning and teaching creates a barrier between the technology and the social practices of teaching. As Hamilton & Friesen discuss, there are two main perspectives on technology: an essentialist perspective of inalienable qualities of the technologies, or an instrumentalist perspective that treats technology as a neutral set of tools. In both cases technology is understood as independent of the social context in which it is used. Hamilton & Friesen argue we need to take a more critical stance, especially in terms of technology as the operationalisation of values, and to engage with larger issues such as social justice, the speed of change and globalisation, the nature of learning, or what it is to be human.

By using the term ‘Enhanced’, TEL adopts a conservative discourse as it assumes there is no need to radically rethink teaching & learning practices but just a need to enhance or tinker with existing practice. Enhancement thus aligns with Transhumanism – a humanist philosophy of rationality and human perfectibility in which technological advances remove the limitations of being human (Bostrom 2005).
Critical post-humanism (Simon 2003) is a philosophical critique of the humanism of the Enlightenment, its assumptions about human nature and its emphasis on human rationality, arguing that these assumptions are complicit in dominatory practices of oppression and control. The human being is just one component in a complex ecology of practice that also includes machines and other non-human components in symmetry. So post-humanism is more about humility and an appreciation that our involvement as humans in our context is complex, inter-related and interactional. Yet TEL buys into a dominant Transhumanism emphasising the cognitive enhancement of the mind, and so could include the use of drugs as a ’technology’ to enhance learning – see the Technology Enhanced Learning System Upgrade report.

Transhumanism positions technology as an object acted on by a human subject, ignoring how humans both shape and are shaped by technology, and failing to ask: is ‘enhancement’ good? Who benefits from enhancement? And is enhancement context-specific? It is argued that TEL could learn from the post-humanist critique of Transhumanism.

The ‘problem’ of Learning draws on Gert Biesta’s writing on the new language of learning and, more specifically, the ‘learnification’ of discourses of education. This involves talking about “learning” rather than “teaching” or “education”. Learning as a term is used as a proxy for education in a way that takes discussions away from considerations of structures of power in education itself. So learnification discursively instrumentalises education: education is provided or delivered to learners based on predefined needs rather than needs emerging and evolving over time. Learners are thus positioned as customers or clients of education ‘providers’, and TEL gets bound up with this neo-liberal discourse.

So the label of TEL tacitly subordinates social practice to technology while also ontologically separating the human from the non-human. The TEL discourse is aligned with a broader enhancement discourse that enrols transhumanism and instrumentalisation, so entrenching a particular view of the relationships between education, learning and technology.

Rather, education technologies involve complex assemblages of human and non-human components, and as practitioners and researchers we need to embrace that complexity. Posthumanism, as a stance, is a way of doing this: understanding learning as an emergent property of complex and fluid networks of human and non-human elements coming together. In posthumanism, the human is not an essence but rather a moment.

weeknotes [21092014]

OK, what have I been up to over the last few weeks:

Well, the supervision of dissertation students has given way to the marking of dissertations. I can’t say I enjoy marking the dissertations I supervised (and am very glad they’re double marked) but I do find it interesting to read, for the first time, the dissertations that I haven’t supervised. … and just in case I thought there would be a pause, I’ve already started the first supervision meetings for a new set of dissertations.

piloting discourse analysis for my PhD studies continues to develop as issues are surfaced and I develop a better understanding of the method “in action”.

The writing of a couple of papers for publications continues. One is near completion and just requires final copy-proofing and permissions on images etc before submission. The other required extensive rewriting (and re-reading it, I did find it a shockingly poor piece of work – writing a short paper seems to be much harder…) and I’m waiting for feedback on that new version.

attended an excellent seminar on Unbundling the University. I hope to return to this topic in the near(ish) future. Interestingly, the imperatives for unbundling appear to be coming to the state school system in the UK (or at least England), with this example of outsourcing school services involving the Academies Enterprise Trust.

Also, we’re now well and truly into the teaching term with the two courses I’m contributing to this semester: Digital Environments for Learning; and Course Design for Digital Environments


weeknotes [25082014]

Over the last couple of weeks, my time has been spent on:

A picture of various draft word processed documents

supervising Masters students on their dissertations, with most submitting last week

working with three part-time students as they start their dissertations

developing a couple of ideas on a new course involving what is, I think, an innovative structure. More to follow on both of these

piloting discourse analysis for my PhD studies which is both interesting and slightly overwhelming – I mean, how much data can I really use?

writing a couple of papers for (hopeful) publication

preparing for teaching starting in a couple of weeks on two online courses: Digital Environments for Learning; and Course Design for Digital Environments

planning a course for a different programme on Managing Organisational Learning & Knowledge (MOLK) that will be a blended course starting in January 2015.

attended an interesting workshop on employability for postgraduate students as part of the Making Most of Masters project. The emphasis on employability is partly driven by changes in the PGT market, as student recruitment is counter-cyclical to the economy. The market for PGT students is therefore expected to become more competitive, requiring HEIs to develop key added-value offers to students, which often revolve around employability, employment outcomes and employer engagement.
The Making Most of Masters project started by mapping what work-based learning was already taking place, then defined a model for work-based dissertations, and delivered and refined that model to finally generate a self-sustaining one – essentially a toolkit for running work-based dissertation projects.

The focus for the next couple of weeks will be on finalising the draft papers and preparing for the teaching…. and, of course, marking dissertations….

Personal learning environments

Network ALL2_BC
I’m currently writing up some ideas on open online professional learning that include considering personal learning networks. I came across this interesting post from Martin Weller on the apparent decline in interest in, and discussion of, personal learning environments. The reasons suggested include the mainstreaming of the practices associated with PLEs and a consolidation of the tools used into a fairly generic set of software, but also that the (research) agenda has shifted from personal learning to institutionally provided personalised learning, partly driven by learning analytics.