IT Futures at Edinburgh
I’m attending the IT Futures conference at Edinburgh today. These notes are not intended to be a comprehensive record of the conference but to highlight points of interest to me and so will be subjective and partial.
A full recording of the conference will be available at the IT Futures website.
The conference opens with an address from the Principal, Sir Timothy O’Shea, with an opening perspective:
He points to the strengths of the University in computing research, supercomputing and so on, and its ‘ludicrously’ strong position in e-learning, with 60-plus online postgraduate programmes. In these areas, our main competitors are in the US rather than the UK.
He begins with a history of computing from the 1940s onwards, pointing to Smallwood on using computers for self-improving teaching and to Papert on computing/e-learning for self-expression. In the 1980s and 90s, digital education was dominated by the OU. In the 1990s, the rise of online collaborative learning was an unexpected development that addressed the criticism that e-learning (computer-assisted learning) lacked interactivity and personalisation.
Argues that the expansion of digital education has been pushed by technological change rather than pedagogical innovation. We still refer to the constructivism of Vygotsky while technological innovation has been massive.
How big is a MOOC?
– 100 MOOCs is about the equivalent in study hours of a BA Hons, and a MOOC is made up of 1,000 ‘minnows’ (I think this means small units of learning). MOOCs are good for access, as tasters, and to test e-learning propositions. They also contribute to the development of other learning initiatives and enhance institutional reputation, including relevance through ‘real-time MOOCs’ such as the one on the Scottish referendum. MOOCs also provide a resource for learning analytics.
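As a rough back-of-the-envelope check of the ‘100 MOOCs ≈ a BA Hons’ heuristic (the credit and hours figures below are my assumptions about a typical UK honours degree, not the speaker’s):

```python
# Back-of-the-envelope check of "100 MOOCs ~ a BA Hons".
# Assumed figures: a typical UK honours degree is 360 credits,
# at 10 notional study hours per credit.
BA_CREDITS = 360
HOURS_PER_CREDIT = 10
MOOCS_PER_BA = 100  # heuristic from the talk

ba_hours = BA_CREDITS * HOURS_PER_CREDIT   # total study hours for a BA Hons
hours_per_mooc = ba_hours / MOOCS_PER_BA   # implied study hours per MOOC
print(ba_hours, hours_per_mooc)
```

On these assumptions a single MOOC comes out at around 36 study hours, which is in the right ballpark for a short online course.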
So e-learning is mature, not new, and blended learning is ‘the new normal’ and dominated by the leading university brands of MIT, Stanford, etc. A huge contribution of e-learning is access.
A research agenda: to include modelling individual learning, including predictive learning support; speed of feedback; effective visualisation; supporting collaboration; understanding natural language; the location of the hybrid boundary (e.g., in practical tests); personal programming (coding); and how realistic it is for non-geeks to develop meaningful coding skills.
Open questions are around data integrity and ownership; issues of digital curation; integration of data sources; who owns the analysis; whether all researchers should be programmers; and how to implement the concept of the learner as researcher.
Question about artificial intelligence. Answer: Tim O’Shea’s initial research interest was in developing programs that would teach intelligently – self-improving teachers – but using AI was too difficult, so he switched towards MIT’s focus on self-expression and on programmers understanding what their code was doing. He still thinks the AI route is too difficult to apply to educational systems.
Q: surprised by an absence of gaming for learning?
A: Clearly they can, and cites Stanford work on the influence of games on learning motivation.
Q: on academic credit and MOOCs
A: Thinks this is inevitable and points to Arizona State University, which is attempting to develop a full degree through MOOCs. Can see the inclusion of MOOCs in particular postgraduate programmes – a heuristic of about a third of a Masters delivered via (external) MOOCs – but this is more likely to be taken forward by more vocational universities in the UK, using MIT or Stanford MOOCs to replace staff!
Now moving on to Susan Halford on ‘Knowing Social Worlds in the Digital Revolution’:
She researches organisational change, work and digital innovation. She has not directly researched changes in academic work but has experienced them through digital innovation. Digital innovation has kick-started a revolution in research through the volume of data and the ability to track, analyse and visualise all sorts of data. So data is no longer just used to research something; it has become the object of social research.
Digital traces may tell us lots about how people live, live together, politics, attitudes, etc. – data capturing social activities in real time and over time rather than relying on the reporting of activities in interviews, surveys and so on. At least, that is the promise, and there is a set of challenges to be addressed to realise the potential of these data (also see this paper from Prof Halford).
Three key challenges: definition, methods and interdisciplinarity.
Definition – what are these digital data? They are not naturally occurring and do not provide a telescope onto social reality: digital data are generated through mediation by technology. In the case of Twitter there is a huge amount of data, but it is mediated by a technological infrastructure that packages it. The world is, therefore, presented according to the categories of the software – interesting, but not naturally occurring, data. Also, social media generate particular behaviours and are not simply mirrors of independent social behaviour – she gives the example of the retweet.
Also, there is the issue of provenance and ownership of data. Survey data are often transparent about the methods used to generate them and, therefore, about the limits of the claims that can be made from them. But social media data are not transparent in how they are generated – the data are privately owned, and the construction of data categories and data streams is not transparent. We know that there is a difference between official and unofficial data. We do not know what Twitter is doing with its data, but we do know it is part of an emerging data economy. So these data are not neutral; they are the product of a series of technological and social decisions that shape them. We need to understand the socio-technical infrastructure that created them.
Method – the idea that in big data the numbers speak for themselves is wrong: numbers are interpreted. The methods we have are not good for the analysis of large data sets. Research tends towards small-scale content analysis or large-scale social network analysis, but neither is particularly effective at understanding the emergence of the social over time – at harnessing the dynamic nature of the data. A lot of big data research on Twitter is limited to mathematical structures and data mining (and is atheoretical), and is weak on the social aspects of social media data.
She has built a tool at Southampton to dynamically map data flows through retweeting.
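The kind of retweet-flow mapping described here can be sketched very minimally (the data and names below are hypothetical; the Southampton tool itself will be far richer): build a directed graph of who retweets whom and count how often each author is retweeted.

```python
from collections import Counter, defaultdict

# Hypothetical retweet records: (retweeter, original_author)
retweets = [
    ("alice", "bob"), ("carol", "bob"),
    ("dave", "alice"), ("erin", "bob"),
]

# Directed graph: edge from retweeter to the author they retweeted
graph = defaultdict(set)
for retweeter, author in retweets:
    graph[retweeter].add(author)

# In-degree: how many times each author's content was retweeted
reach = Counter(author for _, author in retweets)
print(reach.most_common(1))  # bob's content spreads furthest here
```

Adding timestamps to each record and re-counting over sliding windows would give the dynamic, over-time view that Halford argues static network snapshots miss.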
Interdisciplinarity – which is a challenge to operationalise.
Disciplines imagine their object of study in (very) different ways and with different forms of cultural capital (what is the knowledge that counts – ontological and epistemological differences). So the development of interdisciplinarity involves changes on both sides – researchers need to understand programming and computer scientists need to understand social theory. But also need to recognise that some areas cannot be reconciled.
Interdisciplinarity leads to questions of power-relations in academia that need to be addressed and challenged for inter-disciplinarity to work.
But this work is exciting and promising as a field in formation. It also raises responsibilities: the ethical responsibilities involved in representing social groups and societies through data analytics; recognising that digital data exclude those who are not digitally connected; and acknowledging that data alone are inadequate, as social change involves politics and power.
Now Sian Bayne is responding to Prof Halford’s talk: welcomes the socio-technical perspective taken and points to a recent paper, “The moral character of cryptographic work”, as generating interest across technical and social scientists.
Welcomes the emphasis of interdisciplinarity while recognising the dangers of disciplinary imperialism.
What actions can be taken to support interdisciplinarity?
A: Shared resources and shared commitments are important. Academic structures also matter – refers to REF structures discouraging people from submitting against multiple subjects (but it is pointed out that joint submissions are possible).
Time for a break ….
We’re back with Bernard Schafer of the School of Law talking on the legal issues of automated databases. Partly this is drawn from a PG course on the legal issues of robotics.
The main popular reference on the regulation of robots is Terminator, but this is less worrying than Short Circuit: when the robot reads a book, does it create a copy of it? Does the licence allow the mining of the data of the book? See the Qentis hoax. The UK is the only country to recognise copyright ownership of automatically generated works/outputs, but this can be problematic for research – can we use such data for research?
If information wants to be free, do current copyright and legal frameworks support and enable research, teaching, innovation, etc.? Similar issues arose from the industrial revolution.
Robotics is replacing labour – initially manual labour, but there are now examples of the use of robots in teaching at all levels.
But can we automate the dull parts of academic jobs? This creates some interesting legal questions, e.g., in Germany giving a mark is an administrative act, similar to a police caution, and is subject to judicial review – can a robot undertake an administrative act in this way?
Lots of interesting examples of automated education and teaching digital services.
A good question for copyright law is what ‘creativity’ means in a world shared with automatons. For example, when does a computer shift from having an idea to expressing it – a distinction fundamental to copyright law?
Final key question is: “Is our legal system ready for automated generation and re-use of research?”
Now it’s Peter Murray-Rust on academic publishing, demonstrating text/content mining of chemistry texts.
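To give a flavour of what content mining of chemistry texts involves, here is a deliberately naive sketch (my own illustration, not Murray-Rust’s tooling, which is far more sophisticated): pull candidate chemical formulas out of free text with a regular expression.

```python
import re

# Naive pattern for formula-like tokens: two or more element symbols,
# each an uppercase letter, optional lowercase letter, optional count
# (matches e.g. H2O, C6H12O6; real chemistry miners use proper parsers).
FORMULA = re.compile(r"\b(?:[A-Z][a-z]?\d*){2,}\b")

text = "Glucose (C6H12O6) dissolves readily in water (H2O)."
print(FORMULA.findall(text))  # finds C6H12O6 and H2O
```

A pattern this crude will misfire on acronyms and miss many real formulas, which is exactly why serious text-mining of the chemical literature is a research problem rather than a one-liner.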
…And that’s me for the day as I’m being dragged off to other commitments.