Tag Archives: technology

Theorising Technology in Digital Education

These are my notes taken during the presentation and then tidied up a bit in terms of spelling and sense – so they may well be limited, partial and mistaken!

Welcome from Sian Bayne, with the drama of the day ("fire! toilets!"), who confirmed that the event is being livestreamed and that the video is available here.
Lesley Gourlay, as chair for the day, also welcomed participants from across the UK and Copenhagen. The aim is to provide a forum for a more theorised and critical perspective on technology in higher education within the SRHE (Society for Research in Higher Education). Prof Richard Edwards at the School of Education gained funding for international speakers for today's events. Unfortunately Richard is ill and can't be here.

The theme of the event is developing the theoretical, ethical, political and social analysis of digital technologies, and shifting away from an instrumentalist perspective. The event Twitter hashtag is #srhe

The first presentation is by Alex Juhasz on the distributed online network FemTechNet. FemTechNet as a network does not often speak to the field of education, so this is a welcome opportunity (she has also blogged on the event here).

FemTechNet is an active network of scholars, technologists and artists interested in technology and feminism. The network is focused on both the history and future of women in technology sectors and practices. FemTechNet is structured through committees and has a deep process-focused approach to its work that is important in terms of feminist practices. Projects involve the production of a white paper, teaching and teaching practices, workshops, open office hours, co-teaching etc., modelling the interaction of theory and practice. But it has been difficult to engage students in collaborative projects, while staff/professors are much more engaged. Town halls are collaborative discussion events, with an upcoming event on Gamergate to include a teach-in. FemTechNet have also produced a 'rocking' manifesto as "feminist academic hacktivism" and "cyberfeminist praxis".
FemTechNet values are made manifest in Distributed Open Collaborative Courses (DOCCs), themed on Dialogues on Feminism and Technology (2013) and Collaborations in Feminism and Technology (2014). DOCCs position themselves against the xMOOC model to promote a participative approach to course design and distributed approaches to collaboration. The DOCC was labelled the feminist anti-MOOC, based on deep feminist principles including wikistorming, and attracted much press and other interest, some positive and some 'silly' (Fox News). FemTechNet has lots of notes on using tools and teaching approaches that can be applied across lots of different critical topics beyond feminism alone.
DOCCs are designed to be distributed, with a flatter hierarchy and less of a focus on massiveness, using technology in an open way to co-create knowledge beyond transmission. More details on the DOCC as a learning commons vs course can be found here.
The FemTechNet commons is now housed and redesigned at the University of Michigan, although this may be a way of universities avoiding Title IX violations. But as a result, the newer commons has become less open and collaborative as an online space.
Much of FemTechNet's work involved overcoming technological hurdles and was based on the unpaid work of members. FemTechNet engage with critique of labour practices and contexts in higher education.
The DOCC networks involve a wide scope of different types of universities, from Ivy League universities to community colleges and community organisations, working collaboratively.
Student numbers are fairly small, at approximately 200 students, but with very high completion rates and very positive feedback and evaluations. Between 2013 and 2014 there was not really growth in scale, partly due to limitations of infrastructure. Now, with the support of the University of Michigan, there is an increased aspiration to develop international collaborative work.
DOCCs involve networking courses from many different fields of study, ranging from on-campus to fully online courses. Basic components of courses are keynote dialogue videos, smaller keyword dialogues and five shared learning activities. See also the situated knowledge map [link]. There is a big emphasis on shared resources, cross-disciplinarity and inter-institutional working and learning.
So while DOCCs emerged from a feminist network, the tools, models and approaches can be used in many subject areas.

After lunch

Ben Williamson is presenting on Calculating Academics: theorising the algorithmic organisation of the digital university. The opening slide is of a conceptualisation of a digital university that can react to data and information that it receives. Ben will be presenting on a shift to understanding the university as mediated by the digital, with a focus on the role of algorithms.
One of the major terms being used is the smart university: based on big data to enhance teaching, engagement, research and enterprise, optimising and utilising the data universities generate. This turn is situated in the wider concept of 'smart cities'.
Smart cities are 'fabricated spaces' that are imaginary and unrealised, and perhaps unrealisable. Fabricated spaces serve as models that we aspire to realise.
Smart universities are fabricated through:
  • technical devices, software and code;
  • social actors, including software producers and government; and
  • discourses of texts and materials.
In computer science, an algorithm is seen as a set of processes to produce a desired output. But algorithms are black-boxed, hidden in IP and impenetrable code. They are also hidden in wider heterogeneous systems involving languages, regulation and law, standards etc.
Algorithms also emerge and change over time and are, to an extent, out of control: they are complex and emergent.
Socio-algorithmic relationality: algorithms co-constitute social practice (Bucher 2012); generate patterns, order and coordination (Mackenzie 2006); and are social products of specific political, social and cultural contexts that go on to produce those contexts themselves.
Algorithms involve the translation of human action through mathematical logics (Neyland 2014). Gillespie (2014) argues for a sociological analysis of algorithms as social and political as well as technical accomplishments.
Algorithms can be read (Gillespie 2014): as technical solutions; as synecdoche – an abbreviation for a much wider socio-technical system; as a stand-in for something else, corporate ownership for example; and as a commitment to procedure, as they privilege quantification and proceduralisation.
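To make the black-boxing point concrete, here is a minimal sketch (my own illustration, not from the talk) of how even a trivial ranking algorithm embeds human judgments: the weights are choices someone made, yet users only ever see the ordered output.

```python
def rank_items(items):
    """Rank content by a weighted score.

    The weights are editorial/political choices made by whoever wrote
    the algorithm, but they are invisible to anyone who only sees the
    ranked list - the 'black box' the talk describes.
    """
    WEIGHTS = {"recency": 0.5, "popularity": 0.4, "paid_promotion": 0.1}
    return sorted(
        items,
        key=lambda item: sum(item[k] * w for k, w in WEIGHTS.items()),
        reverse=True,
    )

posts = [
    {"id": "a", "recency": 0.9, "popularity": 0.2, "paid_promotion": 0.0},
    {"id": "b", "recency": 0.3, "popularity": 0.8, "paid_promotion": 1.0},
]
print([p["id"] for p in rank_items(posts)])  # ['b', 'a']: promotion wins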
Big data is a problem area in this idea of the smart university. Is there a different epistemology for big data? Big data cannot exist without algorithms and has generated a number of discourses. Wired magazine has suggested that big data is leading to the end of theory, as there is no need to create a hypothesis when big data will locate patterns and results; this is a challenge to traditional academic practice. There is also the rise of commercial social science, such as the Facebook social science team, often linked to nudging behaviours and "engineering the public" (Tufekci 2014). This is replicated in policy development, such as the Centre for the Analysis of Social Media at Demos using new big data sets. We're also seeing new academic initiatives such as social physics at MIT, building a predictive model of human behaviour. Also see the MIT Laboratory for Social Machines in partnership with Twitter.
This raises the question of what expertise is being harnessed for smarter universities. Points to the rise of alternative centres of expertise that can conduct big data analysis, whose staff Mayer-Schönberger and Cukier label 'algorithmists'. Such skills and interdisciplinarity do not fit well in the university. Sees the rise of non-sociologist sociologists doing better social research?
Mayer-Schönberger and Cukier's Learning with Big Data – predictive learning analytics, new learning platforms etc. – is reflected in the discourses on the smarter university. Big data generates the university in immediate and real time: it doesn't have to wait for assessment returns. See, for example, IBM's education for a smarter planet, focused on smarter and prescriptive analytics based on big data.
Knewton talks of inferred student data that suggests the algorithm is objective and consistent. But as Seaver (2014) points out, these algorithms are created and changed through ‘human hands’.
So we’re seeing a big data epistemology that uses statistics that explain and predict human behaviour (Kitchin 2014): algorithms can find patterns where science cannot that you don’t need subject knowledge to understand the data. But he goes on that this is based on fallacies of big data- big data is partial, based on samples, what analysis is selected, what data is or can be captured. Jurgenson (2014) also argues for the understanding of the wider socio-economic networks that create the algorithms – the capture of data points is governed by political choices.
Assumptions of big data are influencing academic research practices. Algorithms are increasingly entwined in knowledge production when working with data – such as NVivo, SPSS and Google Scholar (Beer 2012: the algorithmic creation of social knowledge). We are also seeing the emergence of digital social research around big data and social media, eg the social software studies initiative – social science is increasingly dependent on digital infrastructure not of our making.
Noortje Marres: rethink social research as a distributed and shared accomplishment involving the human and non-human.
This in turn influences academic self-assessment and identity through snowball metrics on citation scores, Researchfish etc., translating academic work into metrics. See Eysenbach's (2011) study linking tweets and rates of citation. So academics are subject to increasing quantified control mediated through software and algorithms – the emergence of the quantified academic self. Academics are socialised by these social media networks, which exacerbates this e-surveillance (Lupton 2014), while shared research develops its own lively social life outside of the originator's control.
Hall (2013) points to a new epistemic environment in which academics are becoming more social (media) entrepreneurial. Lyotard (1979) points to the importance and constraints of the computerisation of research.
Finished with questions:
– how do cognitive-based classrooms learn?
– what data is collected to teach?
– should academics learn to code?

A lot of discussion on the last question. It was also pointed out that it's not asked whether coders should learn to be sociologists.
It was also pointed out that people demonstrate the importance of embodied experiences through protests and demonstrations, reflecting what is lost in the turn to data.

After a short break, we now have Norm Friesen on "Education Technology, or Education as always-already Technological". Talking about educational technology as not new, but as going through a series of entwinements over time. Norm will look at the older technologies of the textbook and the lecture, looking back at older recognisable forms.
Looking back, we can argue that educational technologies now do not present particularly novel problems for higher education. Rather, if higher education has always been constituted together with educational technologies, then we can see how practices can adapt to newer technologies now.
Technology in education has always been about inscription and symbols as well as performance. We can understand the university as a discourse network – see Kittler's discourse network analysis of publishing in the 19th century. Institutions like universities are closely linked to technology, storing and using technologies and modifying technologies for their practices.
Examples include tablets, going back to ancient times, or the hornbook and other forms tightly coupled with institutions of learning and education – such as clay tablets dating back to 2500–2000 BCE that show student work and teacher corrections as symbolic inscriptions of teaching and learning practices. Such tablets work at the scale of individual student work or as larger epic literatures. We can see continued institutional symbolic practices through to the iPad. Technologies here may include epistemic technologies, such as knowledge of multiplication tables or the procedures of a lecture – technologies as a means to an end – so technologies are 'cultural techniques'.

The rest of the presentation will focus on the textbook and the lecture as technologies that are particularly under attack in the revisioning of the university. Ideas of the flipped classroom still privilege the lecture through video capture. Similarly, the textbook has yet to be overtaken by the e-textbook. Both provide continuities from over 800 years of practice and performance.
The lecture goes back to the earliest universities, where it was originally to recite a text – for transmission rather than generation of knowledge, with a focus on the retention of knowledge. Developing one's own ideas in a lecture was unknown, and student work involved extensive note-taking from oral teaching (see Blair 2008). The lecture is about textual reproduction. Even following the printing press, this lecture practice continued, although slowly the lecturer's own commentary on the text was introduced, manifested as interlines between lines written from the dictated text. Educational practice tended not to change as rapidly as the technologies of printing, such that education lagged about 100 years behind.
But around 1800 we see the first lectures given only from the lecturer's own notes, so the lecture was recast around the individual as the creator of knowledge. The individual lecturer and student, not the official text, became the authoritative sources of knowledge. The notion of performance also becomes increasingly important in the procedure of the lecture.
In textbooks we see pedagogical practice embedded in the text as end-of-chapter questions for the student to reflect and respond to (the Pestalozzian method, 1863). This approach can be seen in Vygotsky, Mead and self-regulated learning.
Specific technological configurations supported the increased emphasis on performance, such as podcasting, PowerPoint, projectors etc. (see TED talks).
In the textbook, similar innovations are happening in terms of layout, multimedia and personalised questioning (using algorithms). The textbook becomes an interactional experience but continues from much older forms of the textbook. What is central is the familiar forms – the underlying structures have persisted.

But it is also the case that lecturers no longer espouse their own theories; they do not create new knowledge in the lecture.

What is wrong with ‘Technology Enhanced Learning’

Last Friday I attended a Digital Cultures & Education research group presentation by Sian Bayne on her recent article What’s the matter with ‘Technology Enhanced Learning’?

These are my notes taken during the presentation and then tidied up later – so they may well be limited, partial and mistaken!

16th century French cypher machine in the shape of a book with arms of Henri II. Image from Uploadalt

While Technology Enhanced Learning (TEL) is a widely used term in the UK and Europe, the presentation positions TEL as an essentially conservative term that discursively limits what we do as researchers and practitioners in the field of digital education and learning. Sian's critique draws on three theoretical perspectives:

* Science & Technology Studies (STS) for a critique of 'Technology'
* Critical posthumanism for a critique of 'Enhancement'
* Gert Biesta's language of learning for 'Learning'

For Technology: we don't tend to define it, but rather black-box it as unproblematically in service to teaching practices. This black-boxing of technology as supporting learning and teaching creates a barrier between the technology and the social practices of teaching. As Hamilton & Friesen discuss, there are two main perspectives on technology: an essentialist perspective of inalienable qualities of the technologies, or an instrumental one treating it as a neutral set of tools. In both cases technology is understood as being independent of the social context in which it is used. Hamilton & Friesen argue we need to take a more critical stance, especially in terms of technology as the operationalisation of values, and to engage in larger issues such as social justice, the speed of change and globalisation, the nature of learning, or what it is to be human.

By using the term Enhanced, TEL adopts a conservative discourse: it assumes there is no need to radically rethink teaching and learning practices, just a need to enhance or tinker with existing practice. Enhancement thus aligns with Transhumanism – a humanist philosophy of rationality and human perfectibility in which technological advances remove the limitations of being human (Bostrom 2005).
Critical posthumanism (Simon 2003) is a philosophical critique of the humanism of the Enlightenment, its assumptions about human nature and its emphasis on human rationality, arguing that these assumptions are complicit in dominatory practices of oppression and control. The human being is just one component in a complex ecology of practice that also includes machines and non-human components in symmetry. So posthumanism is more about humility and an appreciation that our involvement as humans in our context is complex, inter-related and interactional. Yet TEL buys into a dominant Transhumanism emphasising the cognitive enhancement of the mind, and so could include the use of drugs as a 'technology' to enhance learning – see the Technology Enhanced Learning System Upgrade report.
Transhumanism positions technology as an object acted on by a human subject, ignoring how humans are shaped by, as well as shape, technology, and does not ask: is 'enhancement' good, who benefits from enhancement, and is enhancement context-specific? It is argued that TEL could learn from the posthumanist critique of Transhumanism.

The 'problem' of Learning draws on Gert Biesta's writing on the new language of learning and, more specifically, the 'learnification' of discourses of education. This involves talking about "learning" rather than "teaching" or "education". Learning as a term is used as a proxy for education, taking discussions away from considerations of structures of power in education itself. So learnification discursively instrumentalises education – education is provided/delivered to learners based on predefined needs rather than needs emerging and evolving over time. Learners are positioned as customers or clients of education 'providers', and TEL gets bound up with this neo-liberal discourse/perspective.

So the label of TEL tacitly subordinates social practice to technology while also ontologically separating the human from the non-human. The TEL discourse is aligned with broader enhancement discourse that enrols transhumanism and instrumentalisation so entrenching a particular view of the relationships between education, learning and technology.

Rather, education technologies involve complex assemblages of human and non-human components, and as practitioners and researchers we need to embrace that complexity. Posthumanism as a stance is a way of doing this, understanding learning as an emergent property of complex and fluid networks of human and non-human elements coming together. In posthumanism, the human is not an essence but rather a moment.

IT Futures Conference – Disruption

Here's my attempt at live blogging the University of Edinburgh IT Futures conference on the theme of Disruption. The hashtag for the conference is #itfutures

The conference is starting with an address from the Principal, Sir Tim O'Shea, on disruptions, predictions and surprises and the need for systematic thinking, especially on what really is surprising in teaching, learning and research activities. He is largely talking about the student experience but notes that IT is also important for research activities, pointing to the use of computational modelling in the recent chemistry Nobel prize.

Disruptions, described as 'the pretentious bit', listed as: nouns and verbs; tilling and fire; writing and printing; machines; engines and electricity; telegraph/phone/vision; and then computers. Notes that the telegraph was hugely disruptive to diplomacy and the role of the ambassador by allowing leader to 'talk' directly to leader.

Describes a computer as an amplifier of cognitive abilities. The question is whether MOOCs are disrupters of HE? Reflects that the printing press and the OU did not fundamentally disrupt the lecture-led HE model. So large changes can still be non-disruptive.

The major predictions of:

  • Moore's law: that the power of computers will double every 18 months, which has held and will stay true for another 8 years;
  • Metcalfe's prediction that the internet would 'fall over' in the early 2000s due to the volume of traffic, which proved not to be true;
  • Bayes' law on probability (see the formula after this list);
  • Semantic networks, predicted from the 1960s, so Google should not be described as surprising;
  • Cloud – first described in the 1960s as software as a service;
  • Intelligent Tutors – look to 1962 for first description of an intelligent tutor.
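Bayes' law, for reference, since it recurs below as the basis of Google Translate (the formula itself is standard; its application here is my gloss, not the speaker's):

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```

In statistical translation terms: the most probable English sentence given a foreign one weighs how likely the foreign sentence is as a translation of each English candidate against the prior probability of that English sentence.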

Minor predictions – such as the iPad as a personal portable device along with ICT integration (iPhone), robots, the videophone, personalised instruction, cybernetics and speech recognition – were made decades ago.

So what are the big surprises?

  • that Moore's law is true and Metcalfe's law is still false (due to redundancy in the system)
  • Facebook and Twitter
  • Google Translate using Bayes’ Law
  • Very personal computers
  • Netscape business model – give the product away for free and work out monetisation later.

Smaller surprises include the World Wide Web; Third World take-up; face recognition now; mouse and take-up; reliability; MOOCs.

ICT characteristics: a memory prosthetic; ubiquitous; reverse time travel; distributed and highly redundant; very cheap; garage start-ups (HP) – the main point being the massively reduced costs of entry now.

The educational opportunities:

  • OERs especially software
  • natural languages – points to the translation of MOOCs by volunteers including in minority languages
  • visualisation of models and data
  • wisdom of crowds – see the astronomy MOOC with volunteers discovering new stars/planets
  • Big data – in health, social data, physics
  • Fast feedback
  • Universal access – “to the blessings of knowledge”

The challenges are in: reliability; security; platform sustainability (most platforms we use now will probably not be here in ten years, so we need to design for platform independence); planned obsolescence; enquirer to alumnus (a single integrated student IT model); internal IS silos; and appropriate assessment. Appropriate assessment is one of the larger challenges and innovation is needed here, as traditional assessments are often inappropriate.

Implications for HE are varied: a squeezed middle model where MIT and Stanford will be OK, as will Manchester Met as a local vocational HEI; the top 100 will be OK. Student mobility, pick & mix and credit accumulation will (finally) be realised as a workable model. This has some interesting implications, as Edinburgh is perceived as the best university in the world for literature.

The assets of the University of Edinburgh: Informatics and High Performance Computing are key strengths; the University has won two Queen's Prizes, both for e-learning (in teaching vets and teaching surgery, both at a distance); EDINA; the Institute for Academic Development and the Global Academies; Information Services; and leading in European provision of MOOCs.

Trends of changes:

  • e-journals and e-books massive growth in both availability and use
  • but also the number of library visits has increased (doubled in ten years)
  • students now increasingly own a computer (99% now have their own).

Which suggests: more MOOCs; more online postgraduate programmes; more hybrid undergraduate programmes (eg, drawing on online resources including from MOOCs); advanced ICT partners; radical experiments; learning analytics is key along with innovation in assessment. Describes stupid schools as those that have not developed online programmes and/ or MOOCs. In terms of partnerships, the University needs to be selective and ask what is in it for us in terms of learning from partners. New Chairs in Learning Analytics and in Digital Education were confirmed.

Q&A

Q: why use the term ‘disruption’

A: the conference organisers used contemporary business school jargon; he prefers 'challenges and opportunities'.

Q: You've discussed how you cannot assume that the ICT incumbent is immune to these global changes, so why would that not apply to universities?

A: in the pre-MOOC world, innovations were led by smaller niche universities, but what has changed now is the scale and impact of MOOCs led by leading world universities. But no institution is safe and it is still the case that smaller institutions can generate 'disruptive' innovations. This is a reason for the need for radical experimentation.

We're now moving to the keynote from Aleks Krotoski: a 30-minute recorded presentation, then she'll join us for Q&A and a response from Chris Speed.

Asks why online information is rarely subjected to the critical thinking that other sources are subject to (journalism, politicians, teachers etc.). Technology is a cultural artefact created by people with particular interests, tools, at a specific place etc. so technology is also art.

What is in the frame – a notion taken from cinema – creates compelling story-telling, but it also leads to the question of what is outside the frame. The same is true of software, but we lack a recognition of this, or of how to question it.

Context is key: your perspective on ideas about the world depends on the context in which you receive the idea, and so context cannot be taken account of by machines. Are we being manipulated by men behind the curtain?

Tech is developed in a wider societal and cultural context – see how computers replicate the office environment. Features of a technology define what can and cannot be done with that technology.

Digital identity: how do we define being human? There are many aspects of the sense of self – names, user names – and can these be translated into software? Digital identities are assigned to any 'thing' – a person, group etc. – and assumed to be either true or false. But identity changes with context and over time, and this is difficult to capture in software. Software thus defines the human online but also reflects the biases of engineers in presenting us as us. Eg, Google's predictions based on algorithms depend on the biases of the engineers; the results appear to be relevant but are not necessarily so, and outputs are presented based on observed behaviours. It also assumes all sources of data are equal and that quantitative judgements are superior.

Facebook: social networks are platforms for self-expression and creating online identities, but how and what you can express is constrained, eg, by skills in photography and writing, or by the categories of FB profile choices, which are really based on FB's need for data for advertisers; you must use your real name, so it is an identity authenticator and you cannot experiment with anonymous identities.

Life is recognised by common 'beats': graduations, coming of age etc., but they can also be very personal, such as personal crises or fantastic experiences that fundamentally change you – a life change. You're not deleting your past but reconsidering it and re-visiting those experiences. But can these artefacts of your past be used against you? While people will recognise that people change, the web does not forget and treats each 'beat' as occurring now. Online does not allow for or consider how we might change and develop as a person, or even have died.

But this is a human, not a technological, problem to be resolved by people when we assess online information – information should be assessed by people. We don't acknowledge that online information is partial and limited.

Educators are at the frontline of digital technology use: don't assume students have the skills to use technology; don't use systems you don't understand; encourage the use of multiple personalities for social development; be critical of technology and the information from technology. Engineers/developers may not have your best interests at heart; demand software works to meet your needs, not the other way round; avoid being constrained by technologies; consider the concerns and biases of the developers when using software.

Highlights how we've developed effective media literacy over 200+ years, but seeing the biases in software and platforms, including within the algorithms, is harder for us. So what is valued by software may not be what we, the users, value. The discomforting experience of being online is often that software assumes an immutable, singular and quantifiable identity.

Now we’re moving to Chris’ response:

Chris describes himself as a fine artist working in digital spaces but finds doing the 'self stuff' difficult. Presents a model showing four interpretations of one living room by different people, so things like the sofa and TV change in prominence and importance. There is no consensual space.

As part of an internet of things project various sensors have been placed in Chris’s house including in the toilet. Also disrupts the domestic setting due to reinterpreting spaces in terms of collecting data.

Aleks positions this work as reflecting on ourselves through data and quantified self. But why have you chosen to do this?

Chris: it's part of an ESRC project on the digital economy, looking at the thing as part of an experience. The artefact can be part of the 'beats' of life. If 'things' are contextual, we should look at correlated data from multiple 'things' to better capture the interactions.

Aleks: can't see the point of much of the internet of things except for data capture on, eg, resource use. What is the politics of these technologies?

C: interested in the disruption of this experiment. Recognises some of the concerns but also wants his children to be lead-users

A: on the focus on children – children make mistakes and should be allowed to make mistakes, but what does making a mistake online mean if the web doesn't forget?

Q&A

Q: people have always left snapshots, but now they leave many more and these are searchable. Yet we've always understood the limitations of interpretation, and so could transfer that understanding – that the artefact is not the person – to the digital age.

A: the key point is that it is now searchable, which raises the question of technofundamentalism – we don't appear to recognise that technology is not neutral, and don't query where and how the information comes from.

Q: Zuckerberg has stated that privacy is dead, but this is a normative statement – is this possible?

A: no, and Zuckerberg has created privacy around himself. To change attitudes and norms, there needs to be a lot more people saying the same thing – that privacy is dead – to change the attitudes and behaviours of people.

Q: there's a distinction between online and psychological identity – but both involve picking someone out from everyone else: in the former by the tech and in the latter by the brain.

A: people are playing more with the sense of self online – could AI develop to the point that it could fool us into thinking we were conversing with a person? This is enormously complex and difficult, but people are getting closer, eg, sentiment analysis is slowly improving – combining AI and social science in a nexus that replicates an identity. But we don't understand the brain, so it is difficult to reverse engineer. Also highlights that online identity is still some form of authentication of the self.

Q: technology only cares about efficiency, and people are being taken over by a dictatorship of efficiency, but the beats of life are not about efficiency. Is it efficiency that disrupts our lives?

A: Great question! But social rituals can be a form of social efficiency. If we know someone is married, then that signals that the person has moved to a particular point in their life – interpretive efficiency – and so it is context specific. This is different from the quantitative basis of efficiency in software, so how can software account for these softer notions of human efficiency?

 

…. just back from break.

Now up is Tim Fawns, e-learning coordinator for Clinical Psychology, speaking on opportunities for deep reflection on collected data – and challenging the assertion that we don't need to remember anything anymore.

Works on the notion of blended memory and that the external context and internal memory are co-dependent.

His research is on digital photography and memory as the practices and conventions on behaviours around photography are changing rapidly. Is talking today specifically on reflection in terms of linking with what we already know. Reflection takes time, energy and sustained attention.

Changes in photography have been rapid since 1990s and change to digital photography. By 2011 more photos were taken on mobile phones than stand-alone cameras.

We depend on photographs for our memory. Taking a photograph of an object impairs your memory of that object compared with simply looking at it. Does this matter? Well yes: if we don't remember and reflect on events then we learn less from experiences.

From his research he noted that people took a lot of photos of significant events and were not very selective, as few photos were deleted even if they were very poor images. People take so many photos that it may detract from the experience, as well as saturating them with images. People rarely did anything with the photos unless they were being used for something specific – forming a slide show or sending to others.

Flickr was used for broadcast purposes, with little concern for who was viewing these images. On FB, people tended to sanitise their discourses around the photos, as they might not be certain who would and could view the images and the discussion of them.

So we’ve ended up with more information than we can process. Photography has shifted from preserving the past for future remembering to recording the present and moving on.

There are some similarities to other technologies, ie, broadcasting to Twitter and a compulsion to be aware of everything going on in a network and the fear of missing something. He also has 322 articles stored on Mendeley, collecting articles that will never be read. Suggests that the more PDFs are collected, the fewer are actually read.

Discusses different image projects and memory maps as ways of reflecting. In an educational perspective, he points to multimodal assessments and how different components interact to be greater than the sum of their parts.

Again, the emphasis is that the issues/concerns with surface reflection from technology are not a result of the technology itself but rather of a cultural context oriented towards the surface, and of individual choices.

Q: confused by the changes in the talk between describing what we’re doing and what we should be doing. Which were you describing?

A: Both – we can see evidence of better behaviours of more reflective use and discussion of artefacts but also can see many examples of surface and unreflective use of technologies.

Q: reflecting on quantified self trends and the creation of online data about ourselves – what are the opportunities for technologies to support reflection?

A: as the tools improve, eg, facial recognition and tagging, you can start generating algorithmic analysis of your behaviours, but the individual episodes remain the main point of interest.

Q: what might be the implications of technologies like Blipfoto and Snapchat?

A: these are interesting. Blipfoto is about recording one photo a day, which is a strange way of recording a day. Snapchat is a response to privacy concerns but can promote more negative behaviours, ie, sexting.

 

Now moving on to James Fleck on innovation and IT Futures.

His passion has been innovation and technology development, and he has recently retired from the OU Business School.

Is interested here in notions of innovation and disruption.

Innovation is how ideas become real – for practical purposes and having impact. Innovation has been a field of serious study for 40+ years but has been on the margins of academic departments; it is now centre stage and everyone is piling in. But while new ideas are emerging, the rigour may be being diluted, especially in the use of the term disruption to mean any level of change. So he would like to look at what innovation and disruption are.

Innovation involves many components, including individual characteristics such as creativity and problem solving, but extends to national systems. Risk-taking is seen as important, but innovators tend not to be risk-takers; rather, they know that their idea is good, and it requires persistence and resilience. Not failures but trials.

Context is important, as is a systematic understanding of the industrial and policy context linked to innovation.

What are the key ideas in applying innovation to ICT:

  • incremental innovation: a linear model from invention to diffusion, either as innovation push or market-led pull. Used in consumer goods, car production and pharmaceuticals, but not ICT.
  • In ICT, innovations tend to be in configuration: innovation is bringing different components together in a new way, along with the practices around the technology.
  • Mobile and platform technologies are a new category. Points to the growth in mobile phone use across the world.
  • Disruptive innovation – from Schumpeter's radical innovation and creative destruction. Also a sense of discontinuity combining new technologies and how these are received (in terms of configuration with culture and society) – Christensen: some technology innovations bring in new markets and users and push out the older technologies. So the real issue is how the technology interacts with the users, eg, from mainframes to PCs; HE and the OU?

Examples:

– the electronic newspaper changed interaction with news journalism, which has now been realised through citizen journalism

– a contraceptive aid based on measuring hormones in urine was a failure, but a success when marketed as an aid to fertility

– the OU has very good student experience feedback despite a low number of full-time staff. Courses are designed collectively and tested with students, and rely on tutor support, as learning content is a commodity and easily accessible. OUBS was also able to develop a practice route by delivering a work-based learning offer. But the OU is not disrupting the HE system; rather, it sustains the system. The key component here is the pedagogy rather than the technology.

Looking at MOOCs, the numbers of students are comparable to 19th-century correspondence courses or the downloads from iTunesU. What is different is the involvement of prestigious institutions. The key question is where the tutor interaction is, ie, the pedagogy; the content is secondary.

The system of HE with pedagogy at the core, interacting with practice, technology, policy, students, staff etc… is relatively stable over time.

In conclusion, technology alone is not disruptive; it is the wider context that disrupts. HE has a very stable ecology of stakeholders and so is more resistant to disruption. Asks the question of what HE is for, and places the learning lower down – priorities are social networking, moving to becoming an independent adult, finding a mate, etc.

Technological capacity for capturing and storing data is growing and allows increasing access to material – Galileo's notebooks are available to all as high-resolution images. We are all potentially innovators.

 

Now time for lunch …

 

Back from lunch and the closing keynote from Cory Doctorow.

To start with a proposition: computers are everywhere and all things are computers. For example, the Informatics building depends on computers and would not function as a building without them; the same could be said for cars or a plane. And we increasingly put computers in our bodies, ie, cochlear implants but also personal music players … defibrillator implants are also computers.

Also, almost everything depends on computers for its production.

We hear a lot about computer crime and failure. In part this is novelty – of interest in a way that the clothes criminals wear to commit their crimes are not. So we hear a lot about regulating computers to fix their flaws, and politicians use some heuristics of where to apply regulations: (a) general technologies, eg, a wheel, are best not regulated; (b) specific technologies can be subject to regulation – if we ban car drivers from using mobile phones, the car continues to function as a car.

Computers are both general and specific, and complex, and have general properties that make them difficult to regulate.

Regulating the use of a computer by installing security software, DRM etc. will allow a back door to over-ride such software (while assuming that only the 'good' guys will use the back door).

Describes the notion of Turing completeness: a computer or language designed to be able to run any computable programme – in effect, to simulate any other computer.
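A concrete way to see this (my illustration, not Doctorow's): any general-purpose language can host an interpreter for another Turing-complete language, so 'general purpose' cannot be regulated away without breaking the machine's generality. The sketch below interprets Brainfuck, a famously minimal Turing-complete language, in a few lines of Python:

```python
def run(code: str, tape_len: int = 30000) -> str:
    """Interpret a Brainfuck program and return its printed output."""
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    # Pre-match the loop brackets so '[' and ']' can jump both ways.
    stack, jump = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jump[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = jump[pc]
        pc += 1
    return ''.join(out)

# 8 * 8 + 1 = 65, the ASCII code for 'A'
print(run('++++++++[>++++++++<-]>+.'))  # prints 'A'
```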

Need to recognise that where there is no demand for restrictions in computers, they will be worked around/subverted by people, eg, DRM, mobile phone lock-in etc. It is illegal to show how this is done, but people will find ways to subvert these constraints.

Is currently discussing the basics of cryptography, and decrypting protected software as an illegal act. Cryptography is used to force onto customers things that customers don't want, eg, the inability of DVDs to play in different regions, or unskippable adverts (DVDs being the last place left for unskippable adverts). So these restrictions are key to business models. But these restrictions also constrain innovation – points to open software and Ubuntu as examples of the innovation that can happen when restrictions on adding features and making changes are removed.

Also, these constraints can be delivered as hidden software on computers that, eg, stops you ripping DVDs. But these are vulnerabilities open to hackers and allow the introduction of viruses.

Also, laptop recovery software is used in law enforcement to monitor people, eg, suspects, school pupils etc. – used by law enforcement but also by criminals.

So the idea of installing back doors in PCs is the wrong response to the problems with computers, as such back doors/hidden software encourage new crimes to be committed. Computers are vulnerable, and this represents a crucial threat to individual freedom.

What to do?

Learn how to encrypt your email and hard drives – but you're only as secure as the people you interact with.
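As a minimal sketch of what 'learn to encrypt' looks like in practice – here using the Python cryptography library's Fernet recipe for symmetric encryption, standing in for the email/disk tools (GPG, full-disk encryption) the talk presumably has in mind:

```python
from cryptography.fernet import Fernet

# Generate a key; whoever holds it can decrypt, so guard it carefully.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"meet me at the usual place")  # ciphertext, safe to store
print(f.decrypt(token))                           # b'meet me at the usual place'
```

The caveat above still applies: the mathematics is the easy part; key handling and the habits of the people you correspond with are where security fails.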

But we should also insist that digital infrastructure and regulations are robust and effective in protecting us – by joining the Open Rights Group, the Free Software Foundation or the Electronic Frontier Foundation.

Learning innovations and digital education

An interesting report on Technology Enhanced Learning (TEL) from Open University based academics. The report discusses:
what is TEL – framed in terms of technologies that "add value to" (enhance) teaching and learning rather than being indivisible from or enmeshed in them. Can you imagine teaching and learning without any technologies (digital or otherwise)? This section does include some useful references to the European and UK policy frameworks, including networks such as STELLAR. The framing of education as a service, as media production and broadcasting (xMOOC?) or as a conversation is useful. The discussion of the education system as stable and acting as a 'constraint' on digital education innovations is also useful – the education system is the more powerful network and slower to transform, which affects what is possible in terms of digital-led innovations in education. So analysis of innovations in digital education should be framed by an understanding that:
New technologies follow complex trajectories often supported or thwarted by other technologies, infrastructural issues, competing standards, social systems, political decisions, and customer demands. [p17].
The report goes on to note that the web was started at CERN as a tool for learning through information sharing. The emphasis here is on innovation occurring within contexts of communities, practices as well as technologies. The discussion of success stories includes mobile learning pointing to the MOBilearn project supported by the European Commission as well as the BBC’s Janala language learning service but doesn’t really discuss the growth of smart phones and tablets as means of going online. In effect, learning technology design needs to be responsive to the requirements of these devices. Other success stories cited include Scratch and xDelia.
In examining the situation for research and innovation in digital education, the report points to certain disadvantages compared to other 'scientific' areas in terms of the coherence of the research agenda and the lack of a single focal point for innovation, such as a single technological solution. The report notes the difficulties of creating a compelling narrative around how technologies are used to enhance learning, and that there is a need "to reassess the use of computer technology from an educational, rather than a technological, perspective; and develop a more sophisticated conceptual model of how ICT can facilitate teaching and learning in the classroom" [p23]. The recommendations – on experimenting with how technologies can enhance informal learning (in the corporate sector), on ensuring research findings are made available inside and outside HE, and on research increasingly being undertaken as applied research (mode 2 knowledge production) – are welcome.
The section on the innovation process in TEL positions innovations as pedagogy and technology combining into emergent practices, supported by communities of practitioners operating within wider sectoral ecologies and contexts. Given the emphasis on practice and complexity, the report finds TEL innovations depend on innovators as bricoleurs – those who make do with whatever is at hand. However, successful innovation depends on bricolage that also takes the wider learning complex into account, and innovations can take decades to diffuse fully. The report goes on to promote a design-based approach to research and evidence-based innovation.
While making a number of recommendations for researchers and [research] policy-makers, the report concludes that "The focus for future TEL research should be on effective transformation of educational practices, rather than small incremental improvements."

Connected & networked higher education

I was interested to read the Connected Learning Environments paper over at Educause. The briefing looks at connected learning environments in higher education and states that:

While e-learning often connotes delivery of information in a sequential, linear fashion, the connected learning environment is integrative, personalized, interconnected, and authentic. Across higher education, leaders and learners are taking note of this evolution in education.

Such environments have the characteristics of (a) seamless integration with student support services, including careers services – this appears to emphasise supporting students in identifying their own curricula and linking their longer-term goals with module and programme learning outcomes, and so may well be a re-articulation of attempts at common credit accumulation and transfer schemes; (b) personalised learning, helping students engage with the best learning opportunities through competency-based education; and (c) authentic learning experiences, linking students to research academics, employers, communities etc. in addressing real-world problems.

The briefing seems to buy into the broader discourse of a need to transform or disrupt higher education by breaking down/permeating institutional boundaries, enabling students to study across different institutions and engage in learning through multiple stakeholders.

The briefing does include various examples of elements of the connected learning environment being delivered by different institutions, which is useful albeit USA-centric.

Pearson’s acquisition of Learning Catalytics

A short post on the acquisition of Learning Catalytics by Pearson as discussed here. Note the quote from Paul Corey, president of science, business, and technology at Pearson:

“With Learning Catalytics we felt we could accomplish three things: help faculty turn the classroom into an immediate engaging environment while measuring feedback without interrupting flow, help students learn from each other, and help foster higher thinking skills.”

As Pearson increasingly focuses on education technology and consumer devices, it will be interesting to watch whether its publishing arm develops towards full-package course designs. If the predictions of a squeezed middle in higher education pan out, is Pearson positioning itself for commercial delivery of courses designed by 'elite' universities, supported by tutor support delivered by local colleges? In other words, removing non-'elite' universities from course and programme design and development and into a 'customer' service role.

Sketchboarding an online course

I'm in the process of reviewing and redesigning an online course. To get a full visual view of the course, I've been experimenting with sketchboarding. The slideshow shows the board as the basic design goes up, currently using a temporal frame for the course's roughly ten-week duration. It's clearly far from finished but is proving a valuable way of thinking through and collaborating on the design. It also has still to develop fully into a sketchboard rather than an ideas board. Once completed, I'll be building this in Moodle and (possibly) BuddyPress.


Digital Scholarship: day of ideas 2

I’m listening now to Tara McPherson on humanities research in a networked world as the opening session of the Digital Scholarship day of ideas. (I’ve started late due to a change in the start-time).

Discussing how large data sets can be presented in a variety of interfaces – for schools, researchers, publishers – and how we are only now beginning to realise the variety of modes of presenting information across all discipline areas. But humanities scholars are not trained in tool building; they should engage in tool building, drawing on their historic work on text, embodiment etc. She points to working with artists on such interpretive tool building – see Mukurtu, an archive platform designed by an anthropologist based on work with indigenous people in Australia. The tools allow indigenous people to control access to knowledge according to their knowledge exchange protocols.

The Open Ended Group create immersive 3D spaces that are designed to be engaging rather than realistic – more usually found in an experimental art gallery. Also showing an example of a project of audio recordings of interviews with drug users at needle exchanges.

Vectors is a journal examining these sorts of interactive and immersive experiences and research. It involves 'papers' that interact, mutate and change, which challenges the notion of scholarship as stable. Interactive experiences are developed in collaboration with scholars in a long iterative process that is not particularly scalable.

The development of a tool-building process was a reaction to problematising interaction with data-sets. An example is HyperCities, extending Google Maps across space and time.

The Alliance for Networking Visual Culture includes universities and publishers working together to reconsider the scales of scholarship using material from visual archives. The process starts with the development of prototypes. Scalar emerged from the Vectors work as a publishing platform for scholars using visual materials. It allows scholars to explore multiple views of visual materials linked to archives and associated online materials, linked to Critical Commons (under US 'fair use', allowing legal use of commercial material). Scalar allows a high level of interactivity with the material of (virtual) books and learning materials.

The aim is to expand the process of scholarly production and to rethink education. For example, USC has a new PhD programme in media studies in which PhD students make (rather than write) a dissertation – see Take Action Games as an example.

Thinking about scholarly practice in an era of big data and archives: valuing openness; thinking of users as co-creators; assuming multiple front-ends/interfaces; scaling scholarship from micro to macro; learning from experimental and artistic practices; engaging designers and information architects; valuing and rewarding collaboration across skill sets.

Scalar treats all items in a data-set as being at the same 'level', affording alternative and different ways of examining and interacting with the data.

USC School of Cinematic Arts has a long history of the use of multi-media in assessment practices and the development of criteria. Have also developed guidance on the evaluation of digital scholarship for appointment and tenure. The key issue here has been in dealing with issues of attribution in collaborative production.

…………..

Now moving on to the next session of the day, with Jeremy Knox, who is researching open education, questioning the current calls for restructuring higher education around autonomous learning, and developing a critique of the open education movement. He is discussing data collection in MOOCs in terms of:

  • Space
  • Objectives of education
  • Bodies and how the human body might be involved in online education

Starts by discussing what a MOOC is: free, delivered online and massive. MOOCs are delivered via universities on platforms provided by the main players such as Udacity, Coursera and edX.

Most MOOCs involve video lectures and quizzes supported by discussion forums, and are assessed through an automatic process (often multiple-choice quizzes) due to the number of students.
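As a toy sketch of why automatic marking scales where human marking cannot (illustrative only – not any platform's actual grader):

```python
def grade(answers: dict, key: dict) -> float:
    """Score one multiple-choice submission against the answer key."""
    correct = sum(answers.get(q) == choice for q, choice in key.items())
    return correct / len(key)

key = {"q1": "b", "q2": "d", "q3": "a"}
# Marking 100,000 submissions costs no more effort than marking one.
print(grade({"q1": "b", "q2": "c", "q3": "a"}, key))  # ~0.67 (2 of 3 correct)
```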

Data collection in MOOCs is an example of big data in education, allowing learning analytics to optimise the educational experience, including through personalisation.

Data is collected specifically from the MOOC platforms. edX claims to use data both to inform their MOOC delivery and to inform the development of campus-based programmes at MIT.

Space – where is the MOOC? The edX website includes images of campus students congregating around the bricks and mortar of the university. Coursera makes use of many images of physical campus buildings. There are also many images of where students are from, through images of the globe – see here.

The metaphor of the space of the MOOC is both local and global.

He taught on one of the six MOOCs delivered by the University of Edinburgh. Students often used visual metaphors of space in their experience of the MOOC – network spaces, flows and spaces of confusion. The space metaphor is also used by instructors in delivering MOOCs, such as in video tours of spaces. The instructors sought to project the campus building as the 'space of the MOOC', and this impacts on the student experience of the MOOC. The buildings may have agency.

What else might have agency in the experience of education? For example, the book as a key 'tool' of education. He developed an RFID system so that tagged books send a tweet with a random sentence from the book when placed on a book-stand/sensor, as a playful way of collecting data. So Twitter streams include tweets from students/people and from books.
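The talk didn't detail the implementation, but the behaviour described might look something like this sketch – read_tag() and post_tweet() are hypothetical stand-ins for the RFID reader and a Twitter client:

```python
import random

def random_sentence(book_text: str) -> str:
    """Pick a random sentence from the book's text (naive split on '.')."""
    sentences = [s.strip() for s in book_text.split('.') if s.strip()]
    return random.choice(sentences) + '.'

def on_book_placed(read_tag, post_tweet, library: dict) -> None:
    """Called when a tagged book lands on the book-stand/sensor."""
    tag_id = read_tag()                 # hypothetical: eg returns 'book-042'
    text = library[tag_id]              # full text keyed by tag id
    post_tweet(random_sentence(text))   # the book 'speaks' a line
```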

Another example is YouTube recommended videos, which recontextualise a video alongside other videos as a mesh of videos and algorithms.

The body in MOOCs? It is taken into account through Signature Track, which uses the body to track the individual student. Now showing a Kinect sensor used to analyse how body position changes interaction with a MOOC course, allowing the body to intervene and impact on the course space.

How can the body of the teacher be other than a body for the external gaze?

……….

Now moving to a Skyped session with Sophia Lycouris, Reader in Digital Choreography at Edinburgh College of Art, who is researching the use of haptic technologies to enable people with impaired sight to experience live dance performance – see here. A prototype has been developed to allow users to experience some movements of the dance through vibrations. Again, it uses a Kinect.

The project explores the relationship between the arts and humanities and innovations in digital technology as trans-disciplinary work, alongside accessing and experiencing forms of performing arts. In particular, it is interested in how technology changes the practice itself and how arts practice can drive technological change (not just respond to it).

The Kinect senses movement, which is transformed into vibrations in a pad held by the participant.
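A speculative sketch of the movement-to-vibration mapping at the heart of such a prototype – assuming per-frame 2D positions of a tracked dancer from the Kinect, with set_vibration() standing in for the pad’s motor driver (a hypothetical name, not the project’s actual code):

```python
import math

def movement_speed(prev, curr, dt):
    """Speed of the tracked point between two frames (units per second)."""
    return math.hypot(curr[0] - prev[0], curr[1] - prev[1]) / dt

def to_intensity(speed, max_speed=2.0):
    """Map speed onto a 0.0-1.0 vibration intensity, clipped at the top."""
    return max(0.0, min(1.0, speed / max_speed))

def set_vibration(intensity):
    # Hypothetical stand-in for the pad's motor driver (e.g. a PWM duty cycle).
    print(f"pad intensity: {intensity:.2f}")

# Example stream of (x, y) dancer positions sampled at 30 frames per second.
frames = [(0.00, 0.00), (0.05, 0.02), (0.30, 0.10), (0.32, 0.11)]
dt = 1 / 30
for prev, curr in zip(frames, frames[1:]):
    set_vibration(to_intensity(movement_speed(prev, curr, dt)))
```

Fast passages of dance produce strong vibration and stillness produces none – the haptic landscape tracks the energy of the movement rather than its visual form.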

Sophia discussed some problems, as Microsoft is now limiting the code changes the project needs to make to the Kinect.

The device does not translate dance but does provide an alternative experience equivalent to seeing the dance. The haptic device becomes a performance space in its own right that is not necessarily similar to a visual experience. So the visual landscape of a performance becomes a haptic landscape to be explored by the wandering fingers of blind users.

The project is part of a number of projects around the world looking at kinesthetic empathy.

Question on what models are being used to investigate the intersection of the human and the digital? Sophia focuses on using the technology as a choreographic medium, moving away from the dancing body. Jeremy’s research is underpinned by theories of post-humanism that decentre the human: socio-materialism, Actor Network Theory and spatial theory.

…………

Now on to Mariza Dima on design-led knowledge exchange with creative industries in the Moving Targets project. Focusing today on the methodological approach to knowledge exchange.

Moving Targets is a three-year project funded by the SFC for the creative industries in Scotland, including sector intermediaries and universities, to involve audiences in collaboration and co-design. The interdisciplinary research team spans design, games and management. The project targets SMEs as well as working with BBC Scotland.

Knowledge exchange is presented as an alternative to the transfer model: the exchange model emphasises interaction between all participants to develop new knowledge and experiences. Design was used as a methodological approach in the co-design of problem identification and problem-solving.

The project used experiential design – design as experience: the designer is not an expert but supports collaboration; the work is transdisciplinary; experience and knowledge are closely related; and working is interactional in contexts of complexity.

The process has stages of research, design and innovation, with innovation tending towards incremental improvement that returns to research. Knowledge is developed as a concept through research and as an experience through design and innovation. The phases:

Research involves secondments into companies as immersion: researching areas for improvement, gaining and sharing knowledge, and undertaking tasks/activities. An example is working with CulturalSparks on community consultation related to the cultural programme of the Commonwealth Games 2014. Research workshops were also held on a quarterly basis.

Design of interventions with companies and audiences using an e-business voucher scheme. A number of prototyping projects were run, including one looking at pre-consumption theatre audience engagement.

Innovation is based on two streams: (a) application of knowledge within the company and (b) identifying transferable knowledge. The project has developed new processes, digital tools and products, with the aim of creating longer-term impact through process improvements and tacit understandings in both the companies and the universities/intermediaries.

The experience of clients was very variable. Agencies were much more receptive to working with higher education, while micro-enterprises were more cautious as they have limited resources. So with such companies the team took a more business-like approach focused on outcomes, and has seen positive impact.

The focus of the project is on supporting creative-industries companies to engage with rapid changes in audiences driven by technological change.

…………

Now onto invisible work in software development, data curatorship and invisible data consumption in industry, government and research. The research framework is based on the social shaping of technology, infrastructure studies and the sociology of business knowledge.

The research focused on climate science, due to the importance of the interface between data and modelling projections through software, and also on modelling data in manufacturing. In manufacturing there is a question of generic software vs localisation via ‘specific vagueness’, where metadata is under-emphasised and under-developed. Sharing data in government involved a more specific focus on the curation of data and on sharing data without affecting data ownership. Discourse on disintermediation tends to downplay the costs of co-ordination, particularly in respect of trust relations.

Data consumption is linked to issues in data visualisation, which aggregates and simplifies data presentation and invites careless consumption of data. Consumers prefer simplified visualisations, such as the two-by-two matrix, to aid prioritisation. Such matrices become the shared language for users and the market, or are amended into different simplified visualisations such as waves or landscapes.
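To make the two-by-two point concrete, a minimal sketch using matplotlib – the axes, midpoint thresholds and example items are my own illustration of the genre, not data from the talk:

```python
import matplotlib.pyplot as plt

# Illustrative items scored on just two dimensions - that reduction is
# exactly what makes the two-by-two readable, and what makes it lossy.
items = {"Tool A": (0.8, 0.7), "Tool B": (0.3, 0.9), "Tool C": (0.6, 0.2)}

fig, ax = plt.subplots()
for name, (impact, feasibility) in items.items():
    ax.scatter(impact, feasibility)
    ax.annotate(name, (impact, feasibility))

# Quadrant lines at the midpoints collapse continuous scores into four buckets.
ax.axvline(0.5, color="grey", linestyle="--")
ax.axhline(0.5, color="grey", linestyle="--")
ax.set_xlabel("Impact")
ax.set_ylabel("Feasibility")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.show()
```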

The specific vagueness of the software ontologies makes comparability of the data across platforms and contexts impossible.

The ERP study involved videoed observation; situational analysis was used in the study of government software to generate grounded data analysis; and the study of data visualisation involved direct interviews with providers and users of data.

Ontologies were discovered to be useless – a life-changing discovery!

MOOCs and business models in higher education

A great post from @audreywatters here on the education technology start-up ecosystem. This includes asking what the impact of venture capital may be, which is something that makes academics (at least in the UK) feel uncomfortable. Indeed, that discomfort was one argument for the development of the OU’s FutureLearn platform – although this is still operated as a separate company majority-owned by the OU.

Also, worth noting the following quote:

Can we (please!) foster a resurgence of open source in education? This isn’t simply about schools running their own Linux servers either (although I wouldn’t mind that). It’s about supporting open source development — community development, capacity building, technology tinkering, bug fixing, and most importantly perhaps, transparency. Can education startups be leaders in developing and supporting openly licensed materials (code and content), helping wrest education free from the control of proprietary businesses?

Which was shortly followed by the announcement that edX was releasing its MOOC software as open source.

While these developments are taking place in a context, at least in the UK, of longer-term declining funding for higher education and of the sector responding by reducing its major cost component, people:

made good efficiency savings during the year. The most significant of these savings related to staff costs, which fell in real terms for a second consecutive year in 2011-12

So, one key question on the future of higher education is whether the prominence of MOOCs is leading to the university as an aggregator, mediating the content and technologies of an ecology of academic, technology and content firms – the university as commissioning agency?

LinkPool [10032013]

A few links and bits ’n’ bobs I’ve been looking at in the last week or so:

Current e-learning practice: a nice brief summary from Stephen Downes on what he’d include as illustrating current e-learning practice. He suggests three areas to show the span of e-learning: (a) ‘traditional’ asynchronous self-paced courses; (b) an educational VLE (Blackboard, Moodle etc.) showing learner interactions; and (c) an example of a MOOC.

Future technology trends from Forrester. Nothing particularly new or outstanding here, but a useful list in one place. Includes: gesture-based interfaces; big data and real-time data; collaboration tools; the internet of things; GPS-based devices and services; cloud; and a very vague:

New federated trust and identity models for a changing world of jobs and careers … and maybe even killing all usernames and passwords

Self-regulated learning (SRL) in the digital age from Steve Wheeler a few months back makes for a useful read, especially given the constant interest in MOOCs and their shift towards becoming credit-bearing (Inside Higher Ed’s discussion of some of the issues here is a useful read – raising the position of ePortfolios). I’m currently involved in a project on transversal competences and in particular “learning to learn”, which I see as being (a) one of the few truly transversal competences and (b) critical to future employability. It seems to me that SRL and self-directed learning are assumed to be unproblematic by the proponents of the disruption of (higher) education – but more on this later.