The conference is starting with an address from the Principal, Sir Tim O’Shea, on disruptions, predictions and surprises, and the need for systematic thinking, especially on what really is surprising in teaching, learning and research activities. He is largely talking about the student experience, but notes that IT is also important for research activities, pointing to the use of computational modelling in the recent chemistry Nobel prizes.
Disruptions are described as ‘the pretentious bit’, and he lists as disruptions: nouns and verbs; tilling and fire; writing and printing; machines; engines and electricity; telegraph/phone/television; and then computers. He notes that the telegraph was hugely disruptive to diplomacy and the role of the ambassador by allowing leader to ‘talk’ directly to leader.
Describes a computer as an amplifier of cognitive abilities. The question is whether MOOCs are disrupters of HE? Reflects that the printing press and the OU did not fundamentally disrupt the lecture-led HE model. So large changes can still be non-disruptive.
The major predictions of:
- Moore’s law: that the power of computers will double every 18 months; expected to stay true for another 8 years;
- Metcalfe’s prediction that the internet would ‘fall over’ in the early 2000s due to the volume of traffic, which proved not to be true;
- Bayes’ law on probability
- Semantic networks predicted from 1960s so Google should not be described as surprising
- Cloud – first described in 1960s as software as a service
- Intelligent Tutors – look to 1962 for first description of an intelligent tutor.
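The compound growth implied by Moore’s law can be sketched with a quick calculation (an illustrative sketch, not from the talk; the function name is mine):

```python
# Illustrative sketch of Moore's law: doubling every 18 months means a
# growth factor of 2 ** (months / 18).

def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Growth factor after `years`, doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# Over the further 8 years mentioned in the talk: 96 months = 5.33 doublings,
# i.e. roughly a 40x increase in computing power.
print(round(moores_law_factor(8)))  # 40
```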
Minor predictions, such as the iPad as a personal portable device along with ICT integration (iPhone), robots, the videophone, personalised instruction, cybernetics and speech recognition, were all made decades ago.
So what are the big surprises?
- that Moore’s law is true and Metcalfe’s prediction is still false (due to redundancy in the system)
- Facebook and Twitter
- Google Translate using Bayes’ Law
- Very personal computers
- Netscape business model – give the product away for free and work out monetisation later.
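The Bayes’ law mentioned above underpins the statistical (noisy-channel) approach behind Google Translate: choose the translation e that maximises P(e|f) ∝ P(f|e)·P(e). A toy sketch of the idea, with all words and probabilities invented purely for illustration:

```python
# Toy noisy-channel translation: pick the English word e maximising
# P(e | f) ∝ P(f | e) * P(e)   (Bayes' law; the evidence P(f) is constant).
# All entries below are invented purely for illustration.

channel = {  # P(french_word | english_word): the "translation model"
    ("maison", "house"): 0.8,
    ("maison", "home"): 0.5,
}
language_model = {"house": 0.3, "home": 0.6}  # P(english_word)

def translate(f: str) -> str:
    """Return the candidate maximising the Bayes posterior (up to P(f))."""
    candidates = [e for (fw, e) in channel if fw == f]
    return max(candidates, key=lambda e: channel[(f, e)] * language_model[e])

# "home" wins: 0.5 * 0.6 = 0.30 beats "house": 0.8 * 0.3 = 0.24
print(translate("maison"))  # home
```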
Smaller surprises include the World Wide Web; Third World take-up; face recognition now; the mouse and its take-up; reliability; MOOCs.
ICT characteristics: a memory prosthetic; ubiquitous; reverse time travel; distributed and highly redundant; very cheap; garage start-ups (HP) – the main point being the massively reduced costs of entry now.
The educational opportunities:
- OERs especially software
- natural languages – points to the translation of MOOCs by volunteers including in minority languages
- visualisation of models and data
- wisdom of crowds – see the astronomy MOOC with volunteers discovering new stars/planets
- Big data – in health, social data, physics
- Fast feedback
- Universal access – “to the blessings of knowledge”
The challenges are in: reliability; security; platform sustainability (most platforms we use now will probably not be here in ten years, so we need to design for platform independence); planned obsolescence; enquirer to alumnus (a single integrated student IT model); internal IS silos; and appropriate assessment. Appropriate assessment is one of the larger challenges and innovation is needed here, as traditional assessments are often inappropriate.
Implications for HE are varied: a squeezed-middle model in which MIT and Stanford will be OK, as will Manchester Met as a local vocational HEI – the top 100 will be OK. Student mobility, pick & mix and credit accumulation will (finally) be realised as a workable model. This has some interesting implications, with Edinburgh perceived as the best university in the world for literature.
The assets of the University of Edinburgh: Informatics and High Performance Computing are key strengths; the University has won two Queen’s Prizes, both for e-learning (in teaching vets and in teaching surgery, both at a distance); EDINA; the Institute for Academic Development and the Global Academies; Information Services; and leading in European provision of MOOCs.
Trends of changes:
- e-journals and e-books massive growth in both availability and use
- but also the number of library visits has increased (doubled in ten years)
- students now increasingly own a computer (99% now have their own).
Which suggests: more MOOCs; more online postgraduate programmes; more hybrid undergraduate programmes (e.g., drawing on online resources, including from MOOCs); advanced ICT partners; radical experiments; and learning analytics as key, along with innovation in assessment. He describes ‘stupid’ schools as those that have not developed online programmes and/or MOOCs. In terms of partnerships, the University needs to be selective and ask what is in it for us in terms of learning from partners. New Chairs in Learning Analytics and in Digital Education were confirmed.
Q: why use the term ‘disruption’?
A: the conference organisers used contemporary business-school jargon; he prefers ‘challenges and opportunities’.
Q: You’ve discussed how you cannot assume that the ICT incumbent is immune to these global changes so why apply that to universities?
A: in the pre-MOOC world, innovations were led by smaller niche universities; what has changed now is the scale and impact of MOOCs led by leading world universities. But no institution is safe, and it is still the case that smaller institutions can generate ‘disruptive’ innovations. This is a reason for the need for radical experimentation.
We’re now moving to the keynote talk from Aleks Krotoski: a 30-minute recorded presentation, then she’ll join us for Q&A and a response from Chris Speed.
Asks why online information is rarely subjected to the critical thinking that other sources are subject to (journalism, politicians, teachers etc.). Technology is a cultural artefact created by people with particular interests, tools, at a specific place etc. so technology is also art.
So what is in the frame – a notion taken from cinema – creates compelling story-telling, but it also leads to the question of what is outside the frame. The same is true of software, but we lack a recognition of this, or of how to question it.
Context is key: your perspective on ideas about the world depends on the context in which you receive the idea, and context cannot be taken account of by machines. Are we being manipulated by men behind the curtain?
Tech is being developed in a wider societal and cultural context – see how computers replicate the office environment. Features of a technology define what can and cannot be done with that technology.
Digital identity: how do we define being human? There are many aspects to a sense of self – names, user names – and can this be translated into software? Digital identities are assigned to any ‘thing’ – a person, group etc. – and assumed to be either true or false. But identity changes in context and over time, and this is difficult to capture in software. It defines the human online, but also reflects the biases of engineers in presenting us as us. E.g., Google’s algorithm-based predictions depend on the biases of the engineers: the results appear to be relevant but are not necessarily so, and outputs are presented based on observed behaviours. This also assumes all sources of data are equal and that quantitative judgements are superior.
Facebook: social networks are platforms for self-expression and create online identities, but how and what you can express is constrained, e.g., by skills in photography and writing; by the categories of FB profile choices, which are really based on FB’s need for data for advertisers; and by the requirement to use your real name, which makes FB an identity authenticator, so you cannot experiment with anonymous identities.
Life is recognised by common ‘beats’: graduations, coming of age etc., but these can be very personal, such as personal crises or fantastic experiences that fundamentally change you – a life change. You’re not deleting your past but reconsidering it and re-visiting those experiences. But can these artefacts of your past be used against you? While people will recognise that people change, the web does not forget and treats each ‘beat’ as occurring now. The online world does not allow for or consider how we might change and develop as a person, or even have died.
But this is a human, not a technological, problem, to be resolved by people when we assess online information – information should be assessed by people. We don’t acknowledge that online information is partial and limited.
Educators are at the frontline of digital technology use: don’t assume students have the skills to use technology; don’t use systems you don’t understand; encourage the use of multiple personalities for social development; be critical of technology and the information from technology. Engineers/developers may not have your best interests at heart; demand software works to meet your needs, not the other way round; avoid being constrained by technologies; consider the concerns and biases of the developers when using software.
She highlights how we’ve developed effective media literacy over 200+ years, but seeing the biases in software and platforms, including within the algorithms, is harder for us to understand. So what is valued by software may not be what we, the users, value. The discomforting experience of being online is often that software assumes an immutable, singular and quantifiable identity.
Now we’re moving to Chris’ response:
Chris describes himself as a fine artist working in digital spaces but finds doing the ‘self stuff’ difficult. He presents a model showing four interpretations of one living room by different people, in which things like the sofa and TV change in prominence and importance. There is no consensual space.
As part of an internet-of-things project, various sensors have been placed in Chris’s house, including in the toilet. This also disrupts the domestic setting by reinterpreting spaces in terms of collecting data.
Aleks positions this work as reflecting on ourselves through data and quantified self. But why have you chosen to do this?
Chris: it’s part of an ESRC project on the digital economy, looking at the thing as part of an experience. The artefact can be part of the ‘beats’ of life. If ‘things’ are contextual, we should look at correlated data from multiple ‘things’ that better captures the interactions.
Aleks: can’t see the point of much of the internet of things except for data capture on, e.g., resource use. What are the politics of these technologies?
C: interested in the disruption of this experiment. Recognises some of the concerns but also wants his children to be lead-users
A: children make mistakes and should be allowed to make mistakes, but what does making a mistake online mean if the web doesn’t forget?
Q: people have always left snapshots, but now they leave many more and these are searchable; yet we’ve always understood the limitations of interpretation, and so could transfer that understanding – that the artefact is not the person – to the digital age.
A: the key point is that it is now searchable. This raises the question of technofundamentalism: we don’t appear to recognise that technology is not neutral, and don’t query where and how the information comes from.
Q: Zuckerberg has stated that privacy is dead, but this is a normative statement – is this possible?
A: no, and Zuckerberg has created privacy around himself. To change attitudes and norms, there needs to be a lot more people saying the same thing – that privacy is dead – to change the attitudes and behaviours of people.
Q: there is a distinction between online and psychological identity – but both involve picking someone out from everyone else: in the former by the tech, and in the latter by the brain.
A: people are playing more with their sense of self online – could AI develop to the point that it could fool us into thinking we were conversing with a person? This is enormously complex and difficult, but people are getting closer, e.g., sentiment analysis is slowly improving – combining AI and social science in a nexus that replicates an identity. But we don’t understand the brain, so it is difficult to reverse engineer. She also highlights that online identity is still some form of authentication of self.
Q: technology only cares about efficiency, and people are being taken over by a dictatorship of efficiency, but the beats of life are not about efficiency. Is it efficiency that disrupts our lives?
A: Great question! But social rituals can be a form of social efficiency. If we know someone is married, then that signals the person has moved to a particular point in their life – interpretive efficiency – and so it is context specific. This is different from the quantitative basis of efficiency in software, though: how can software account for these softer notions of human efficiency?
…. just back from break.
Now up is Tim Fawns, e-learning coordinator for Clinical Psychology, speaking on opportunities for deep reflection on collected data – and challenging the assertion that we don’t need to remember anything anymore.
Works on the notion of blended memory and that the external context and internal memory are co-dependent.
His research is on digital photography and memory as the practices and conventions on behaviours around photography are changing rapidly. Is talking today specifically on reflection in terms of linking with what we already know. Reflection takes time, energy and sustained attention.
Changes in photography have been rapid since the 1990s and the shift to digital. By 2011 more photos were taken on mobile phones than on stand-alone cameras.
We depend on photographs for our memory. Taking a photograph of an object impairs your memory of that object compared with simply looking at it. Does this matter? Well yes: if we don’t remember and reflect on events, then we learn less from experiences.
From his research he noted that people took a lot of photos of significant events and were not very selective, as few photos were deleted even when the images were very poor. People take so many photos that it may detract from the experience, as well as leaving them saturated with images. People rarely did anything with the photos unless they were being used for something specific – forming a slide show or sending to others.
Flickr was used for broadcast purposes, with little concern for who was viewing the images. On FB, people tended to sanitise their discourse around the photos, as they may not be certain who would and could view the images and the discussion of them.
So we’ve ended up with more information than we can process. Photography has shifted from preserving the past for future remembering to recording the present and moving on.
There are some similarities to other technologies, e.g., broadcasting to Twitter and a compulsion to be aware of everything going on in a network, with the fear of missing something. He also has 322 articles stored on Mendeley, collecting articles that will never be read. He suggests that the more PDFs collected, the fewer are actually read.
Discusses different image projects and memory maps as ways of reflecting. In an educational perspective, he points to multimodal assessments and how different components interact to be greater than the sum of their parts.
Again, he emphasises that the issues/concerns with surface reflection from technology are not a result of the technology itself but rather of a cultural tendency towards the surface and of individual choices.
Q: confused by the changes in the talk between describing what we’re doing and what we should be doing. Which were you describing?
A: Both – we can see evidence of better behaviours of more reflective use and discussion of artefacts but also can see many examples of surface and unreflective use of technologies.
Q: Reflecting on the quantified-self trend and the creation of online data about ourselves, what are the opportunities for technologies to support reflection?
A: as the tools improve, e.g., facial recognition and tagging, you can start generating algorithmic analysis of your behaviours, but the individual episodes remain the main point of interest.
Q: what might be the implications of technologies like Blipfoto and Snapchat?
A: these are interesting. Blipfoto is about recording one photo a day, which is a strange way of recording a day. Snapchat is a response to privacy concerns but can promote more negative behaviours, e.g., sexting.
Now moving on to James Fleck on innovation and IT Futures.
His passion has been innovation and technology development, and he has recently retired from the OU Business School.
Is interested here in notions of innovation and disruption.
Innovation is how ideas become real – for practical purposes and having impact. Innovation has been a field of serious study for 40+ years but has been on the margins of academic departments; now it is centre stage and everyone is piling in. While new ideas are emerging, the rigour may be being diluted, especially in the use of the term ‘disruption’ to mean any level of change. So he would like to look at what innovation and disruption are.
Innovation involves many components including individual characteristics such as creativity and problem solving but does extend to national systems. Risk-taking seen as important but innovators tend not to be risk-takers but rather know that their idea is good and requires persistence and resilience. Not failures but trials.
Context is important and systematic understanding of the industrial and policy context linking to innovation.
What are the key ideas in applying innovation to ICT:
- incremental innovation: a linear model from invention to diffusion either as innovation push or market-led pull innovations. Used in consumer goods, car production, pharmaceuticals but not ICT
- In ICT innovations tend to be in configuration and innovation is bringing different components together in a new way, also practices around the technology
- mobile and platform technologies are a new category. He points to the growth in mobile phone use across the world.
- disruptive innovation – from Schumpeter’s radical innovation and creative destruction. Also a sense of discontinuity, combining new technologies with how these are received (in terms of configuration with culture and society). Christensen: some technology innovations bring in new markets and users and push out the older technologies. So the real issue is how the technology interacts with the users, e.g., from mainframes to PCs; HE and the OU?
– the electronic newspaper changed interaction with news journalism which has now been realised through citizen journalism
– he discussed a contraceptive aid based on measuring hormones in urine that was a failure, but a success when marketed as an aid to fertility
– the OU has very good student experience feedback despite a low number of full-time staff. Courses are designed collectively and tested with students, and rely on tutor support, as learning content is a commodity and easily accessible. OUBS was also able to develop a practice route by delivering a work-based learning offer. But the OU is not disrupting the HE system; rather, it sustains the system. The key component here is the pedagogy rather than the technology.
Looking at MOOCs, the student numbers are comparable to 19th-century correspondence courses or the downloads from iTunes U. What is different is the involvement of prestigious institutions. The key question is where the tutor interaction is, i.e., the pedagogy; the content is secondary.
The system of HE with pedagogy at the core, interacting with practice, technology, policy, students, staff etc… is relatively stable over time.
In conclusion, technology alone is not disruptive; the wider context is. HE has a very stable ecology of stakeholders and so is more resistant to disruption. He asks the question of what HE is for and places learning lower down – the priorities are social networking, moving to becoming an independent adult, finding a mate, etc.
Technology’s capacity for capturing and storing data is growing and allows increasing access to material – Galileo’s notebooks as high-resolution images available to all. We are all potentially innovators.
Now time for lunch …
Back from lunch and the closing key note from Cory Doctorow
He starts with the proposition that computers are everywhere and all things are computers. For example, the Informatics building depends on computers and would not function as a building without them; the same could be said for cars or a plane. And we increasingly put computers in our bodies, e.g., cochlear implants, but also personal music players … a defibrillator implant is also a computer.
Also, almost everything depends on computers for its production.
We hear a lot about computer crime and failure. In part this is novelty – computers are of interest in a way that the clothes criminals wear to commit their crimes are not. So we hear a lot about regulating computers to fix their flaws, and politicians use some heuristics for where to apply regulations: (a) general technologies, e.g., the wheel, are best not regulated; (b) specific technologies can be subject to regulation: if we ban car drivers from using mobile phones, the car continues to function as a car.
Computers are both general and specific, and complex, and have general properties that make them difficult to regulate.
One approach is to regulate the use of a computer by installing security software, DRM etc., while allowing a back door to over-ride such software (on the assumption that only the ‘good guys’ will use the back door).
He describes the notion of Turing completeness: a computer or language designed to be able to run any program that any other computer can run.
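The idea of Turing completeness can be sketched with a tiny register machine: just increment, decrement and jump-if-nonzero are enough (given unbounded registers) to compute anything any computer can, which is part of why general-purpose computers resist selective regulation. The instruction set and example program below are my own toy illustration:

```python
# Minimal sketch of Turing completeness: a toy register machine with only
# inc, dec and jnz (jump if the named register is non-zero).

def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":
            registers[args[0]] += 1
        elif op == "dec":
            registers[args[0]] -= 1
        elif op == "jnz":  # jump to instruction args[1] if register non-zero
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# Add register a into register b by looping: dec a, inc b until a == 0.
# Register "one" always holds 1, giving us an unconditional jump.
program = [
    ("jnz", "a", 2),    # 0: if a != 0, enter the loop body
    ("jnz", "one", 5),  # 1: unconditional jump past the end -> halt
    ("dec", "a"),       # 2
    ("inc", "b"),       # 3
    ("jnz", "one", 0),  # 4: unconditional jump back to the test
]
regs = run(program, {"a": 3, "b": 4, "one": 1})
print(regs["b"])  # 7
```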
We need to recognise that where there is no demand for regulations built into computers, they will be worked around/subverted by people, e.g., DRM, mobile-phone lock-in etc. It is illegal to show how this is done, but people will find ways to subvert these constraints.
He is currently discussing the basics of cryptography and decrypting protected software as an illegal act. Cryptography is used to force onto customers things that customers don’t want, e.g., the inability of DVDs to play in different regions, or unskippable adverts (DVDs being the last place left for unskippable adverts). These restrictions are key to business models, but they also constrain innovation – he points to open-source software and Ubuntu as examples of what innovations can happen when restrictions on adding features and changes are removed.
Also, these constraints can be delivered as hidden software on computers that, e.g., stops you ripping DVDs. But these are vulnerabilities to hackers and allow the introduction of viruses.
Also, laptop-recovery software is used in law enforcement to monitor people, e.g., suspects, school pupils etc., but it is also used by criminals.
So the idea of installing back doors in PCs is the wrong response to the problems with computers, as such back doors/hidden software encourage new crimes to be committed. Computers are vulnerable, and this represents a crucial threat to individual freedom.
What to do?
Learn how to encrypt your email and hard drives but you’re only as secure as the people you interact with.
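The idea behind the advice above can be sketched with a one-time pad: XOR the message with a truly random key of the same length, and the ciphertext is unreadable without the key. This is purely illustrative (not from the talk) – in practice you would use real tools such as GPG for email and full-disk encryption for drives, not hand-rolled code:

```python
# Illustrative one-time-pad sketch of symmetric encryption: XOR each message
# byte with a random key byte. Secure only if the key is truly random, as
# long as the message, and never reused. Use real tools (GPG etc.) in practice.
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message), "one-time pad needs a key as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # fresh random key for this message
ciphertext = encrypt(message, key)
print(decrypt(ciphertext, key).decode())  # meet me at noon
```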
But we should also insist that digital infrastructure and regulations are robust and effective in protecting us – by joining the Open Rights Group, the Free Software Foundation or the Electronic Frontier Foundation.