Tag Archives: innovation

Distributed governance of technological innovation through the case of WiBro in S. Korea.

I attended the Social Informatics Cluster meeting to hear Jee Hyun Suh present on: Co-evolution of an emerging mobile technology and mobile services: distributed governance of technological innovation through the case of WiBro in S. Korea. These are rough notes taken during the presentation.

She presented the story of WiBro and its implications for the governance of large-scale technological innovations by technology companies and government. WiBro was initiated in 2001 as a national R&D programme for high-speed portable internet; it was harmonised with national and international standards (WiMax) and went to commercial launch in 2006. It is widely seen as a case of market failure despite being a successful technological innovation.

The research objectives were initially to examine the socio-technical factors in the development of the technology and the gap between the visions and outcomes of the technology commercialisation and explore the governance of large scale and complex innovations. The technology’s development was interpreted through social learning processes with a particular focus on building alignments between the technology, service evolution, standardisation and social learning within a wider development arena of R&D.

Over the course of the research period, 2001 to date, the focus of interest shifted from the design and development of the technologies, to commercialisation, and then on to service evolution. The WiBro development was linked to broader policy imperatives of positioning S. Korea as an innovation leader.

The technology itself was predicated on a problematisation of the inefficient use of the 2.3 GHz band, and then on the enrolment of stakeholders to co-shape a generic vision of using that bandwidth for a portable internet service. This vision co-evolved with the drive towards a high-performance portable internet and processes of standardisation. Standard setting was closely linked to bandwidth/spectrum allocation, and the whole came to be conceived as a seamlessly interlinked innovation process. But different interests and objectives across stakeholders remained unresolved, especially between a focus on technology development versus commercial exploitation through existing technologies. There were also shifting alignments around the adoption of differing international standards. The technology itself was successfully developed, and a pre-commercial product was showcased at APEC 2005.
Commercialisation occurred around processes of spectrum licensing. Again, there were different visions for WiBro, e.g., as an extension of fixed-line services, as a differentiated service, and as a complementary service to existing mobile networks. These different visions were rolled into different commercial aims, e.g., early market advantage versus an emphasis on interoperability, adoption or blocking of VoIP, as well as the emergence of 3G services. The later development of 4G mobile resulted in shifts to the vision of WiBro and how it should evolve.
The commercial focus also bifurcated between a domestic and a global market focus. In the domestic market, there could be seen the dynamics of trial and error in finding niche markets for WiBro, e.g., mobile routers, digital shipyards, WiBro-Taxi. These market learning processes occurred despite tensions between players and their visions for the service.
The argument presented was that the ‘problem’ of WiBro should be framed in terms of uncertainties in innovation processes rather than as a failure in diffusion/commercialisation. The coordination challenges and dispersed arenas of innovation enabled key players to interact in the social shaping of this particular technology, highlighting the importance of stakeholder reflexivity and flexibility in large-scale technological innovations.
It was also noted during the Q&A that WiBro coincided with the testing, and general failure, of attempts at developing national technology champions that could then be exported into global markets.

For more on social learning processes in innovation diffusion, see:

weeknotes [25082014]

Over the last couple of weeks, my time has been spent on:

supervising Masters students on their dissertations, with most submitting last week

working with three part-time students as they start their dissertations

developing a couple of ideas on a new course involving what is, I think, an innovative structure. More to follow on both of these

piloting discourse analysis for my PhD studies which is both interesting and slightly overwhelming – I mean, how much data can I really use?

writing a couple of papers for (hopeful) publication

preparing for teaching starting in a couple of weeks on two online courses: Digital Environments for Learning; and Course Design for Digital Environments

planning a course for a different programme on Managing Organisational Learning & Knowledge (MOLK) that will be a blended course starting in January 2015.

attended an interesting workshop on employability for postgraduate students as part of the Making Most of Masters project. The emphasis on employability is partly driven by changes in the PGT market, as student recruitment is counter-cyclical to the economy. Hence the market for PGT students is expected to become more competitive, requiring HEIs to develop key added-value offers to students, which often revolve around issues of employability, employment outcomes and employer engagement.
The Making Most of Masters project started with mapping what work-based learning was already taking place, then defining a model for work-based dissertations and delivering and refining the model to finally generate a self-sustaining model. This is essentially a toolkit for running work-based dissertation projects.

The focus for the next couple of weeks will be on finalising the draft papers and preparing for the teaching…. and, of course, marking dissertations….

Open innovators

There’s an interesting series of blog posts from Nesta and 100%Open on a joint project on supporting open innovation in charities, which can be found here. The main common points emerging for charities to develop further, although these could be applicable to any organisation, are:

Breaking down internal siloes

Focusing innovation investment on core business concerns such as increasing giving

Taking well managed risks and not being afraid to be seen to ‘fail’

Developing a culture that embraces testing of ‘imperfect’ ideas as a way of developing ones that will work 

Again, the emphasis is placed on organisational learning through testing, iteration and “failing fast”.

IT Futures Conference – Disruption

Here’s my attempt at live blogging the University of Edinburgh IT Futures conference on the theme of Disruption. The hashtag for the conference is #itfutures

The conference is starting with an address from the Principal, Sir Tim O’Shea, on disruptions, predictions and surprises, and the need for systematic thinking, especially on what really is surprising in teaching, learning and research activities. He is largely talking about the student experience but notes that IT is also important for research activities, pointing to the use of computational modelling in the recent chemistry Nobel Prize.

Disruptions are described as ‘the pretentious bit’; he lists as disruptions: nouns and verbs; tilling and fire; writing and printing; machines; engines and electricity; telegraph/phone/vision; and then computers. Notes that the telegraph was hugely disruptive to diplomacy and the role of the ambassador by allowing leaders to ‘talk’ directly to one another.

Describes a computer as an amplifier of cognitive abilities. The question is whether MOOCs are disrupters of HE? Reflects that the printing press and the OU did not fundamentally disrupt the lecture-led HE model. So large changes can still be non-disruptive.

The major predictions:

  • Moore’s law: that the power of computers will double every 18 months; this will stay true for another 8 years;
  • Metcalfe’s prediction that the internet would ‘fall over’ in the early 2000s due to the volume of traffic proved not to be true
  • Bayes’ law on probability
  • Semantic networks predicted from 1960s so Google should not be described as surprising
  • Cloud – first described in 1960s as software as a service
  • Intelligent Tutors – look to 1962 for first description of an intelligent tutor.
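As a side note, the compound growth that Moore’s law implies is easy to make concrete. The sketch below is my own illustration (the 18-month doubling period is the figure quoted above; the specific time spans are arbitrary):

```python
# Rough illustration of Moore's law as quoted above: computing power
# doubles roughly every 18 months. Figures are illustrative only.

def moores_law_factor(years, doubling_months=18):
    """Return the multiplicative growth in computing power after `years`."""
    return 2 ** (years * 12 / doubling_months)

print(round(moores_law_factor(3)))   # 3 years = 2 doublings -> 4
print(round(moores_law_factor(8)))   # the further 8 years predicted -> ~40
```

Eight more years of doubling every 18 months would mean roughly a forty-fold increase in power, which is the scale of change the prediction commits to.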

Minor predictions such as the iPad as a personal portable device along with ICT integration (iPhone); robots; videophone; personalised instruction; cybernetics; and speech recognition predicted decades ago.

So what are the big surprises?

  • that Moore’s law is true and Metcalfe’s law is still false (due to redundancy in the system)
  • Facebook and Twitter
  • Google Translate using Bayes’ Law
  • Very personal computers
  • Netscape business model – give the product away for free and work out monetisation later.
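Bayes’ law, cited in both lists, is simple to state: P(A|B) = P(B|A)·P(A)/P(B). A tiny worked example (my own, with invented numbers) shows the machinery that applications such as statistical translation build on:

```python
# Minimal illustration of Bayes' law: P(A|B) = P(B|A) * P(A) / P(B).
# Invented numbers: a test that is 99% sensitive, 95% specific, for a
# condition with 1% prevalence.

def bayes_posterior(prior, likelihood, likelihood_given_not):
    """P(hypothesis | evidence), expanding P(evidence) over both cases."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

posterior = bayes_posterior(prior=0.01, likelihood=0.99, likelihood_given_not=0.05)
print(f"{posterior:.3f}")  # 0.167: a positive result still leaves only ~17% probability
```

The same pattern, scaled up over candidate translations rather than diagnoses, is why Google Translate’s use of Bayes’ law could be seen as predictable rather than surprising.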

Smaller surprises include the World Wide Web; Third World take-up; face recognition now; the mouse and its take-up; reliability; MOOCs.

ICT characteristics: a memory prosthetic; ubiquitous; reversed time travel; distributed and highly redundant; very cheap; garage start-ups (HP) – the main point being the massively reduced costs of entry now.

The educational opportunities:

  • OERs especially software
  • natural languages – points to the translation of MOOCs by volunteers including in minority languages
  • visualisation of models and data
  • wisdom of crowds – see the astronomy MOOC with volunteers discovering new stars/planets
  • Big data – in health, social data, physics
  • Fast feedback
  • Universal access – “to the blessings of knowledge”

The challenges are in: reliability; security; platform sustainability (most platforms we use now will probably not be here in ten years, so we need to design for platform independence); planned obsolescence; enquirer to alumnus (a single integrated student IT model); internal IS silos; and appropriate assessment. Appropriate assessment is one of the larger challenges and innovation is needed here, as traditional assessments are often inappropriate.

Implications for HE are varied: in a squeezed-middle model, MIT and Stanford will be OK, as will Manchester Met as a local vocational HEI; the top 100 will be OK. Student mobility, pick & mix and credit accumulation will (finally) be realised as a workable model. This has some interesting implications, as Edinburgh is perceived as the best university in the world for literature.

The assets of the University of Edinburgh: Informatics and High Performance Computing are key strengths; the University has won two Queen’s Prizes, both for e-learning (in teaching vets and in teaching surgery, both at a distance); EDINA; the Institute for Academic Development and the Global Academies; Information Services; and leading the European provision of MOOCs.

Trends of changes:

  • e-journals and e-books massive growth in both availability and use
  • but also the number of library visits has increased (doubled in ten years)
  • students now increasingly own a computer (99% now have their own).

Which suggests: more MOOCs; more online postgraduate programmes; more hybrid undergraduate programmes (e.g., drawing on online resources including from MOOCs); advanced ICT partners; radical experiments; and learning analytics as key, along with innovation in assessment. Describes ‘stupid schools’ as those that have not developed online programmes and/or MOOCs. In terms of partnerships, the University needs to be selective and ask what is in it for us in terms of learning from partners. New Chairs in Learning Analytics and in Digital Education were confirmed.


Q: why use the term ‘disruption’?

A: the conference organisers used contemporary business-school jargon; he prefers ‘challenges and opportunities’

Q: You’ve discussed how you cannot assume that the ICT incumbent is immune to these global changes so why apply that to universities?

A: in the pre-MOOC world, innovations were led by smaller niche universities; what has changed now is the scale and impact of MOOCs led by leading world universities. But no institution is safe, and it is still the case that smaller institutions can generate ‘disruptive’ innovations. This is a reason for the need for radical experimentation.

We’re now moving to the keynote talk from Aleks Krotoski: a 30-minute recorded presentation, then she’ll join us for Q&A and a response from Chris Speed.

Asks why online information is rarely subjected to the critical thinking that other sources are subject to (journalism, politicians, teachers etc.). Technology is a cultural artefact created by people with particular interests, tools, at a specific place etc. so technology is also art.

So what is in the frame – a notion taken from cinema – creates compelling story-telling but also leads to the question of what is outside the frame. The same is true of software, but we lack a recognition of this, or of how to question it.

Context is key: your perspective on ideas about the world depends on the context in which you receive the idea, and so context cannot be taken account of by machines. Are we being manipulated by men behind the curtain?

Tech is developed in a wider societal and cultural context – see how computers replicate the office environment. Features of a technology define what can and cannot be done with that technology.

Digital identity: how do we define being human? There are many aspects to a sense of self – names, user names – and can this be translated into software? Digital identities are assigned to any ‘thing’ – a person, group etc. – and assumed to be either true or false. But identity changes with context and over time, and this is difficult to capture in software. Software defines the human online but also reflects the biases of engineers in presenting us as us. E.g., Google’s algorithm-based predictions depend on the biases of the engineers; the results appear relevant but are not necessarily so, and outputs are presented based on observed behaviours. It also assumes all sources of data are equal and that quantitative judgements are superior.

Facebook: social networks act as platforms for self-expression and create online identities, but how and what you can express is constrained, e.g., by skills in photography and writing, and by the categories of FB profile choices, which are really based on FB’s need for data for advertisers. You must use your real name, so FB acts as an identity authenticator and you cannot experiment with anonymous identities.

Life is recognised by common ‘beats’: graduations, coming of age etc., but these can be very personal, such as personal crises or fantastic experiences that fundamentally change you – a life change. You’re not deleting your past but reconsidering it and revisiting those experiences. But these artefacts of your past can be used against you. While people will recognise that people change, the web does not forget and treats each ‘beat’ as occurring now. The online world does not allow for or consider how we might change and develop as a person, or even have died.

But this is a human, not a technological, problem, to be resolved by people when we assess online information – information should be assessed by people. We don’t acknowledge that online information is partial and limited.

Educators are at the frontline of digital technology use: don’t assume students have the skills to use technology; don’t use systems you don’t understand; encourage the use of multiple personalities for social development; be critical of technology and the information from technology. Engineers/developers may not have your best interests at heart; demand software works to meet your needs, not the other way round; avoid being constrained by technologies; consider the concerns and biases of the developers when using software.

Highlights how we’ve developed effective media literacy over 200+ years, but seeing the biases in software and platforms, including within the algorithms, is harder for us. So what is valued by software may not be what we, the users, value. The discomforting experience of being online is often that software assumes an immutable, singular and quantifiable identity.

Now we’re moving to Chris’ response:

Chris describes himself as a fine artist working in digital spaces but finds doing the ‘self stuff’ difficult. Presents a model showing four interpretations of one living room by different people, so things like the sofa and TV change in prominence and importance. There is no consensual space.

As part of an internet-of-things project, various sensors have been placed in Chris’s house, including in the toilet. This also disrupts the domestic setting by reinterpreting spaces in terms of collecting data.

Aleks positions this work as reflecting on ourselves through data and quantified self. But why have you chosen to do this?

Chris: it’s part of an ESRC project on the digital economy, looking at the thing as part of an experience. The artefact can be part of the ‘beats’ of life. If ‘things’ are contextual, we should look at correlated data from multiple ‘things’ that better captures the interactions.

Aleks: can’t see the point of much of the internet of things except for data capture on, e.g., resource use. What is the politics of these technologies?

C: interested in the disruption of this experiment. Recognises some of the concerns but also wants his children to be lead-users

A: children make mistakes and should be allowed to make mistakes, but what does making a mistake online mean if the web doesn’t forget?


Q: people have always left snapshots, but they are now leaving many more and these are searchable; but we’ve always understood the limitations of interpretation, so we could transfer the understanding that the artefact is not the person to the digital age.

A: the key point is that it is now searchable. This raises the question of techno-fundamentalism: we don’t appear to recognise that technology is not neutral, and we don’t query where and how the information comes from.

Q: Zuckerberg has stated that privacy is dead but this is a normative statement, but is this possible?

A: no, and Zuckerberg has created privacy around himself. To change attitudes and norms, there would need to be a lot more people saying the same thing – that privacy is dead – before the behaviours of people change.

Q: there is a distinction between online and psychological identity – but both involve picking someone out from everyone else, in the former by the tech and in the latter by the brain

A: people are playing more with their sense of self online – could AI develop to the point that it could fool us into thinking we were conversing with a person? This is enormously complex and difficult, but people are getting closer, e.g., sentiment analysis is slowly improving – combining AI and social science in a nexus that replicates an identity. But we don’t understand the brain, and so it is difficult to reverse engineer. Also highlights that online identity is still some form of authentication of self.

Q: technology only cares about efficiency and that people are being taken over by a dictatorship of efficiency but the beats of life are not efficiency. Is it efficiency that disrupts our lives?

A: Great question! But social rituals can be a form of social efficiency. If we know someone is married, that signals the person has moved to a particular point in their life – interpretive efficiency – and so is context specific. Although this is different from the quantitative basis of efficiency in software: how can software account for these softer notions of human efficiency?


…. just back from break.

Now up is Tim Fawns, e-learning coordinator for Clinical Psychology, speaking on opportunities for deep reflection on collected data – and challenging the assertion that we don’t need to remember anything anymore.

Works on the notion of blended memory and that the external context and internal memory are co-dependent.

His research is on digital photography and memory, as the practices, conventions and behaviours around photography are changing rapidly. He is talking today specifically on reflection in terms of linking with what we already know. Reflection takes time, energy and sustained attention.

Changes in photography have been rapid since the 1990s and the change to digital photography. By 2011 more photos were taken on mobile phones than on stand-alone cameras.

We depend on photographs for our memory. Taking a photograph of an object impairs your memory of that object compared with just looking at it. Does this matter? Well yes: if we don’t remember and reflect on events, then we learn less from experiences.

From his research he noted that people took a lot of photos of significant events and that people are not very selective, as few photos were deleted even if they were very poor images. People take so many photos that it may detract from the experience, as well as leaving them saturated with images. People rarely did anything with the photos unless they were being used for something specific – forming a slide show or sending to others.

Flickr was used for broadcast purposes with little concern for who was viewing the images. On FB, people tended to sanitise their discourses around the photos, as they may not be certain who would and could view the images and the discussion of them.

So we’ve ended up with more information than we can process. Photography has shifted from preserving the past for future remembering to recording the present and moving on.

There are some similarities to other technologies, i.e., broadcasting to Twitter and a compulsion to be aware of everything going on in a network and the fear of missing something. He also has 322 articles stored on Mendeley and keeps collecting articles that will never be read – suggesting that the more PDFs collected, the fewer are actually read.

Discusses different image projects and memory maps as ways of reflecting. In an educational perspective, he points to multimodal assessments and how different components interact to be greater than the sum of their parts.

Again, he emphasises that the issues/concerns with surface reflection from technology are not a result of the technology itself but rather of a cultural orientation towards the surface, and of individual choices.

Q: confused by the changes in the talk between describing what we’re doing and what we should be doing. Which were you describing?

A: Both – we can see evidence of better behaviours in more reflective use and discussion of artefacts, but we can also see many examples of surface and unreflective use of technologies.

Q: Reflecting on the quantified-self trend and the creation of online data about ourselves, what are the opportunities for technologies to support reflection?

A: as the tools improve, e.g., facial recognition, tagging, you can start generating algorithmic analysis of your behaviours, but the individual episodes remain the main point of interest.

Q: what might be the implications of technologies like Blip-photo and Snapchat?

A: these are interesting. Blip-photo is about recording one photo a day, which is a strange way of recording a day. Snapchat is a response to privacy concerns but can promote more negative behaviours, i.e., sexting.


Now moving on to James Fleck on innovation and IT Futures.

His passion has been innovation and technology development; he has recently retired from the OU Business School.

Is interested here in notions of innovation and disruption.

Innovation is how ideas become real – for practical purposes and having impact. Innovation has been a field of serious study for 40+ years but has been on the margins of academic departments; it is now centre stage and everyone is piling in. While new ideas are emerging, the rigour may be being diluted, especially in the use of the term disruption to mean any level of change. So he would like to look at what innovation and disruption are.

Innovation involves many components, including individual characteristics such as creativity and problem solving, but extends to national systems. Risk-taking is seen as important, but innovators tend not to be risk-takers; rather, they know that their idea is good, and it requires persistence and resilience. Not failures but trials.

Context is important, as is a systematic understanding of the industrial and policy context linked to innovation.

What are the key ideas in applying innovation to ICT:

  • incremental innovation: a linear model from invention to diffusion, either as innovation-push or market-led pull innovations. Used in consumer goods, car production and pharmaceuticals, but not ICT
  • in ICT, innovations tend to be in configuration: innovation is bringing different components together in a new way, along with the practices around the technology
  • mobile and platform technologies are a new category. Points to the growth in mobile phone use across the world.
  • disruptive innovation – from Schumpeter’s radical innovation and creative destruction. Also a sense of discontinuity, combining new technologies and how these are received (in terms of configuration with culture and society). For Christensen, some technology innovations bring in new markets and users and push out the older technologies. So the real issue is how the technology interacts with the users, e.g., from mainframes to PCs; HE and the OU?


– the electronic newspaper changed interaction with news journalism which has now been realised through citizen journalism

– discussed a contraceptive aid based on measuring hormones in urine that was a failure, but a success when marketed as an aid to fertility

– the OU has very good student-experience feedback despite low numbers of full-time staff. Courses are designed collectively and tested with students, and rely on tutor support, as learning content is a commodity and easily accessible. OUBS was also able to develop a practice route by delivering a work-based learning offer. But the OU is not disrupting the HE system; rather, it sustains the system. The key component here is the pedagogy rather than the technology.

Looking at MOOCs, the numbers of students are comparable to 19th-century correspondence courses or the downloads from iTunesU. What is different is the involvement of prestigious institutions. The key question is where the tutor interaction, i.e., the pedagogy, is; the content is secondary.

The system of HE with pedagogy at the core, interacting with practice, technology, policy, students, staff etc… is relatively stable over time.

In conclusion, technology alone is not disruptive; the wider context is. HE has a very stable ecology of stakeholders and so is more resistant to disruption. He asks the question of what HE is for and places the learning lower down – priorities are social networking, moving to becoming an independent adult, finding a mate, etc.

Technology’s capacity for capturing and storing data is growing and allows increasing access to material – Galileo’s notebooks as high-resolution images available to all. We are all potentially innovators.


Now time for lunch …


Back from lunch and the closing keynote from Cory Doctorow

To start with a proposition: computers are everywhere and all things are computers. For example, the Informatics building depends on computers and would not function as a building without them; the same could be said for cars or a plane. And we increasingly put computers in our bodies, e.g., cochlear implants but also personal music players … defibrillator implants are also computers.

Also, almost everything depends on computers for its production.

We hear a lot about computer crime and failure. In part this is novelty – it is of interest in a way that the clothes criminals wear to commit their crimes are not. So we hear a lot about regulating computers to fix their flaws, and politicians use some heuristics for where to apply regulations: (a) general technologies, e.g., the wheel, are best not regulated; (b) specific technologies can be subject to regulation, so if we ban car drivers from using mobile phones, the car continues to function as a car.

Computers are general, specific and complex all at once, and their general properties make them difficult to regulate.

Attempts to regulate the use of a computer involve installing security software, DRM etc., but these allow a back door to override such software (on the assumption that only the ‘good’ guys will use the back door).

Describes the notion of Turing completeness: a computer or language is designed to be able to run any program that any other computer can run.
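A quick illustration of why this matters (my own sketch, not from the talk): even a tiny three-instruction language – increment, decrement, jump-if-nonzero – can express arbitrary computation, which is part of why a general-purpose computer cannot easily be restricted to run only “approved” programs.

```python
# A minimal counter-machine interpreter. Instructions are
# ('inc', r), ('dec', r) or ('jnz', r, target); registers hold
# non-negative integers. Illustrative sketch only.

def run(program, registers):
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == 'inc':
            registers[op[1]] += 1
        elif op[0] == 'dec':
            registers[op[1]] = max(0, registers[op[1]] - 1)
        elif op[0] == 'jnz' and registers[op[1]] != 0:
            pc = op[2]          # jump back and keep looping
            continue
        pc += 1
    return registers

# Add register 0 into register 1 by repeated decrement/increment:
print(run([('dec', 0), ('inc', 1), ('jnz', 0, 0)], {0: 3, 1: 4}))  # {0: 0, 1: 7}
```

Machines of this kind can, in principle, simulate any other program, which is the sense of “run any programme” above.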

Need to recognise that where there is no demand for the regulations built into computers, they will be worked around/subverted by people, e.g., DRM, mobile phone lock-in etc. It is illegal to show how this is done, but people will find ways to subvert these constraints.

Is currently discussing the basics of cryptography, and decrypting protected software as an illegal act. Cryptography is used to force onto customers things that customers don’t want, e.g., the inability of DVDs to play in different regions, or unskippable adverts (DVDs being the last place left for unskippable adverts). So these restrictions are key to business models. But these restrictions also constrain innovation – he points to open-source software and Ubuntu as examples of the innovations that can happen when restrictions on adding features and making changes are removed.

Also, these constraints can be delivered as hidden software on computers that, e.g., stops you ripping DVDs. But these are vulnerabilities to hackers and allow the introduction of viruses.

Also, laptop recovery software is used to monitor people, e.g., suspects, school pupils etc. – used by law enforcement but also by criminals.

So the idea of installing a back door in PCs is the wrong response to the problems with computers, as such back doors/hidden software encourage new crimes to be committed. Computers are vulnerable, and this represents a crucial threat to individual freedom.

What to do?

Learn how to encrypt your email and hard drives but you’re only as secure as the people you interact with.
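To make the symmetric-encryption idea behind such tools concrete, here is a minimal one-time-pad sketch of my own (illustrative only: in practice use established tools and libraries, never hand-rolled code):

```python
# The symmetric principle behind email and disk encryption, shown as a
# one-time pad: the same secret key both encrypts and decrypts.
# Illustrative only; real tools use authenticated ciphers, not raw XOR.
import secrets

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))       # random key, used exactly once
ciphertext = xor_bytes(message, key)          # unreadable without the key
assert xor_bytes(ciphertext, key) == message  # applying the key again decrypts
```

The "only as secure as the people you interact with" point follows directly: whoever holds the key (or plaintext) can undo the protection, however strong the cipher.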

But we should also insist that digital infrastructure and regulations are robust and effective in protecting us – by joining the Open Rights Group, the Free Software Foundation or the Electronic Frontier Foundation.

Learning innovations and digital education

An interesting report on Technology Enhanced Learning (TEL) from Open University based academics. The report discusses:
1. what TEL is, but in terms of technologies that “add value to” (enhance) teaching and learning rather than being indivisible from or enmeshed in teaching and learning. Can you imagine teaching and learning without any technologies (digital or otherwise)? This section does include some useful references to the European and UK policy frameworks, including networks such as STELLAR. The framing of education as a service, as media production and broadcasting (xMOOC?) or as a conversation is useful. The discussion of the education system as being stable and acting as a ‘constraint’ on digital education innovations is also useful – the education system is the more powerful network and slower to transform, which affects what is possible in terms of digital-led innovations in education. So analysis of innovations in digital education should be framed by an understanding that:
New technologies follow complex trajectories often supported or thwarted by other technologies, infrastructural issues, competing standards, social systems, political decisions, and customer demands. [p17].
The report goes on to note that the web was started at CERN as a tool for learning through information sharing. The emphasis here is on innovation occurring within contexts of communities, practices as well as technologies. The discussion of success stories includes mobile learning pointing to the MOBilearn project supported by the European Commission as well as the BBC’s Janala language learning service but doesn’t really discuss the growth of smart phones and tablets as means of going online. In effect, learning technology design needs to be responsive to the requirements of these devices. Other success stories cited include Scratch and xDelia.
In examining the situation for research and innovation in digital education, the report points to certain disadvantages compared to other ‘scientific’ areas in terms of the coherence of the research agenda and the lack of a single focal point for innovation, such as a single technological solution. The report notes the difficulties of creating a compelling narrative around how technologies are used to enhance learning, and that there is “a need to reassess the use of computer technology from an educational, rather than a technological, perspective; and develop a more sophisticated conceptual model of how ICT can facilitate teaching and learning in the classroom” [p23]. The recommendations on experimenting with how technologies can enhance informal learning (in the corporate sector), on ensuring research findings are made available inside and outside HE, and on research increasingly being undertaken as applied research (mode 2 knowledge production) are welcome.
The section on the innovation process in TEL positions innovations as pedagogy and technology combining into emergent practices, supported by communities of practitioners operating within wider sectoral ecologies and contexts. Given the emphasis on practice and complexity, the report finds TEL innovations depend on innovators as bricoleurs – people who make do with whatever is at hand. However, successful innovations depend on bricolage that also takes the wider learning complex into account, and innovations can take decades to diffuse fully. The report goes on to promote a design-based approach to research and evidence-based innovation.
While making a number of recommendations for researchers and [research] policy-makers, the report concludes that “The focus for future TEL research should be on effective transformation of educational practices, rather than small incremental improvements.”

LinkPool [18092013]

Here are a few links of interest:

Harold Jarche reviews Gary Klein’s “Seeing What Others Don’t” on how insights happen, and provides an effective scaffolding for reflecting on and in action and for the importance of stories in sense-making. I can see the models highlighted in the review as pragmatic approaches to operationalising the probe-sense-respond approach in the Cynefin model.

Tim Kastelle has posted on building your experimental capability, with the key statement that capabilities for experimenting are key to innovation:

The second big idea is the focus on learning. If you try an idea and it doesn’t work, and you don’t learn anything from this, then it really is a failure. None of us have enough spare resources to afford this. Nevertheless, to innovate we have to try out a fair number of ideas that end up not working as we expected. This is only feasible if we structure things so that we learn from our experiments.

In essence, the steps are: pick a problem; work out what a solution might look like and how you would identify that you have succeeded; do something; learn from what worked and what didn’t; and keep building on what works.
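The loop above can be sketched in code – purely my own illustration, with hypothetical names throughout, not anything from Kastelle’s post:

```python
# A minimal sketch of the experiment loop described above (all names hypothetical).
def run_experiments(problem, ideas, success_criterion):
    """Try each idea against the problem; keep what works, learn from everything."""
    learnings = []   # record the outcome of every trial, pass or fail
    working = []     # ideas that met the success criterion
    for idea in ideas:
        outcome = idea(problem)                 # "do something"
        succeeded = success_criterion(outcome)  # did we hit the target?
        learnings.append((idea.__name__, succeeded))
        if succeeded:
            working.append(idea)                # keep building on what works
    return working, learnings

# Toy usage: the "problem" is the number 16, success is getting below 10.
def halve(n): return n / 2
def add_one(n): return n + 1

working, learnings = run_experiments(16, [halve, add_one],
                                     success_criterion=lambda out: out < 10)
```

The point of recording `learnings` even for failed trials is exactly the one Kastelle makes: an experiment you don’t learn from is the only real failure.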


Digital Scholarship: day of ideas 2

I’m listening now to Tara McPherson on humanities research in a networked world as the opening session of the Digital Scholarship day of ideas. (I’ve started late due to a change in the start-time).

Discussing how large data sets can be presented in a variety of interfaces – for schools, researchers, publishers – and how we are only now beginning to realise the variety of modes of presenting information across all discipline areas. But humanities scholars are not trained in tool building; they should engage in tool building, drawing on their historic work on text, embodiment etc. She points to working with artists on such interpretive tool building – see Mukurtu, an archive platform designed by an anthropologist based on work with indigenous people in Australia. Its tools allow indigenous people to control access to knowledge according to their knowledge exchange protocols.

The Open Ended Group creates immersive 3D spaces designed to be engaging rather than realistic – more usually found in an experimental art gallery. Also showing an example of a project based on audio recordings of interviews with drug users at needle exchanges.

Vectors is a journal examining these sorts of interactive and immersive experiences and research. It involves ‘papers’ that interact, mutate and change, which challenges the notion of scholarship as stable. The interactive experiences are developed in collaboration with scholars in a long iterative process that is not particularly scalable.

The development of a tool-building process was a reaction to problematising interaction with data-sets. An example is HyperCities, which extends Google Maps across space and time.

The Alliance for Networking Visual Culture involves universities and publishers working together to reconsider the scales of scholarship and to use material from visual archives. The process starts with the development of prototypes. Scalar emerged from the Vectors work as a publishing platform for scholars using visual materials. It allows scholars to explore multiple views of visual materials linked to archives and associated online materials, linked in turn to Critical Commons (under US ‘fair use’, allowing legal use of commercial material). Scalar allows a high level of interactivity with the material of (virtual) books and learning materials.

The aim is to expand the process of scholarly production and to rethink education. For example, USC has a new PhD programme in media studies in which PhD students make (rather than write) a dissertation – see Take Action Games as an example.

Thinking about scholarly practice in an era of big data and archives: valuing openness; thinking of users as co-creators; assuming multiple front-ends/interfaces; scaling scholarship from micro to macro; learning from experimental and artistic practices; engaging designers and information architects; valuing and rewarding collaboration across skill sets.

Scalar treats all items in a data-set as being at the same ‘level’, affording alternative and different ways of examining and interacting with the data.

The USC School of Cinematic Arts has a long history of the use of multimedia in assessment practices and the development of criteria for it. It has also developed guidance on the evaluation of digital scholarship for appointment and tenure. The key issue here has been dealing with issues of attribution in collaborative production.


Now moved on to the next session of the day with Jeremy Knox, who is researching open education, questioning the current calls for restructuring higher education around autonomous learning and developing a critique of the open education movement. He is discussing data collection on MOOCs in terms of:

  • Space
  • Objectives of education
  • Bodies and how the human body might be involved in online education

Starts by discussing what a MOOC is: free, delivered online and massive. MOOCs are delivered via universities on platforms provided through main players such as Udacity, Coursera and edX.

Most MOOCs involve video lectures and quizzes supported by discussion forums, and are assessed through an automatic process (often multiple-choice quizzes) due to the number of students.

Data collection in MOOCs is an example of big data in education, allowing learning analytics to optimise the educational experience, including through personalisation.

Data is collected specifically from the MOOC platforms. edX claims to use data to inform both its MOOC delivery and the development of campus-based programmes at MIT.

Space – where is the MOOC? The edX website includes images of campus students congregating around the bricks and mortar of the university. Coursera makes use of many images of physical campus buildings. There are also many images of where students are from, through images of the globe – see here.

The metaphor of the space of the MOOC is both local and global.

Jeremy taught on one of the six MOOCs delivered by the University of Edinburgh. Students often used visual metaphors of space in their experience of the MOOC – network spaces, flows and spaces of confusion. The space metaphor was also used by instructors in delivering MOOCs, such as in video tours of spaces. The instructors sought to project the campus building as the ‘space of the MOOC’, and this impacts on the student experience of the MOOC. The buildings may have agency.

What else might have agency in the experience of education? For example, the book as a key ‘tool’ of education. He developed an RFID system so that tagged books send a Tweet with a random sentence from the book when placed on a book-stand/sensor, as a playful way of collecting data. So Twitter streams include tweets from students/people and from books.
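The talk gave no implementation detail, but the core of such a system could be sketched roughly as follows – the function names, the sentence-picking logic and the injected `post_tweet` client are all my assumptions, not details from the project:

```python
import random

TWEET_LIMIT = 140  # the character limit at the time

def pick_sentence(book_text):
    """Pick a random sentence from the book's text that fits in a tweet."""
    sentences = [s.strip() for s in book_text.split(".") if s.strip()]
    candidates = [s for s in sentences if len(s) <= TWEET_LIMIT]
    return random.choice(candidates) if candidates else None

def on_rfid_scan(tag_id, library, post_tweet):
    """Handler fired when a tagged book lands on the book-stand/sensor.
    post_tweet stands in for a real Twitter client call (left abstract here)."""
    sentence = pick_sentence(library[tag_id])
    if sentence is not None:
        post_tweet(sentence)

# Toy usage with a stand-in for the Twitter client.
posted = []
on_rfid_scan("tag-042", {"tag-042": "First line. Second line."}, posted.append)
```

Injecting the posting function rather than hard-wiring a Twitter API call keeps the sketch testable and avoids guessing at a specific client library.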

Another example is how YouTube’s recommended videos recontextualise a video alongside other videos, as a mesh of videos and algorithms.

The body in MOOCs? It is taken into account through Signature Track, which uses the body to track the individual student. Now showing a Kinect sensor used to analyse how body position changes interaction with a MOOC course, which allows the body to intervene and impact on the course space.

How can the body of the teacher be something other than the object of an external gaze?


Now moving to a Skyped session with Sophia Lycouris, Reader in Digital Choreography at Edinburgh College of Art, who is working on research using haptic technologies to enable people with impaired sight to experience live dance performance – see here. A prototype has been developed to allow users to experience some movements of the dance through vibrations. Again, it uses a Kinect.

The project explores the relationship between arts and humanities and innovations in digital technology as trans-disciplinary, alongside accessing and experiencing forms of performing arts. In particular, she is interested in how technologies change the practice itself and how arts practice can drive technological change (not just respond to it).

The Kinect senses movement, which is transformed into vibrations in a pad held by the participant.
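In outline, the movement-to-vibration mapping might look something like this – a hypothetical sketch of my own, since the talk did not describe the implementation:

```python
def movement_magnitude(prev, curr):
    """Euclidean distance between two (x, y, z) joint positions
    taken from successive Kinect frames."""
    return sum((a - b) ** 2 for a, b in zip(prev, curr)) ** 0.5

def to_vibration(magnitude, max_magnitude=1.0, levels=255):
    """Map a movement magnitude to a vibration intensity (0..levels),
    clamping so that very large movements simply saturate the pad."""
    scaled = min(magnitude / max_magnitude, 1.0)
    return round(scaled * levels)
```

Per frame, the prototype would presumably compute something like `to_vibration(movement_magnitude(prev_joint, curr_joint))` and drive the pad’s motor at that intensity; the actual choreographic mapping is surely richer than this.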

Discussing some problems, as Microsoft is now limiting the code changes needed for the project.

The device does not translate the dance but does provide an alternative experience equivalent to seeing it. The haptic device becomes a performance space in its own right that is not necessarily similar to a visual experience. So the visual landscape of a performance becomes a haptic landscape to be explored by the wandering fingers of blind users.

The project is part of a number of projects around the world looking at kinesthetic empathy.

A question on what models are being used to investigate the intersection of the human and the digital. Sophia focuses on using the technology as a choreographic medium, moving away from the dancing body. Jeremy’s research is underpinned by theories of post-humanism that decentre the human: socio-materialism, Actor Network Theory and spatial theory.


Now on to Mariza Dima on design-led knowledge exchange with creative industries in the Moving Targets project, focusing today on the methodological approach to knowledge exchange.

Moving Targets is a three-year project funded by the SFC for creative industries in Scotland, including sector intermediaries and universities, to involve audiences in collaboration and co-design. It has an interdisciplinary research team including design, games and management. The project targets SMEs as well as working with BBC Scotland.

Knowledge exchange is used as an alternative to the transfer model. The exchange model emphasises interaction between all participants to develop new knowledge and experiences. Design was used as a methodological approach in the co-design of problem identification and problem-solving.

They used experiential design, which is design as experience: the designer is not an expert but supports collaboration; the approach is transdisciplinary; experience and knowledge are closely related; and the work is interactional, in a context of complexity.

The process stages are research, design and innovation, with innovation tending towards incremental improvement that returns to research. Knowledge is developed as a concept through research and as an experience through design and innovation. The phases:

Research involves secondments into companies as immersion: researching areas for improvement, gaining and sharing knowledge, and undertaking tasks/activities. An example is working with CulturalSparks on community consultation related to the cultural programme of the Commonwealth Games 2014. Research workshops were also held on a quarterly basis.

Design of interventions with companies and audiences using an e-business voucher scheme. They ran a number of prototyping projects, including one looking at pre-consumption theatre audience engagement.

Innovation is based on two streams: (a) application of knowledge within the company and (b) identifying transferable knowledge. They have developed new processes, digital tools and products with the aim of creating longer-term impact through process improvements and tacit understandings, both by the companies and by the universities/intermediaries.

The experience of the clients was very variable. Agencies were much more receptive to working with higher education, while micro-enterprises were more cautious as they have limited resources. So with companies, the team took a more business-like approach focused on outcomes, and gained a positive impact.

The project’s focus is on supporting creative industries companies to engage with rapid changes in audiences driven by technological changes.


Now on to looking at invisible work in software development, data curatorship and invisible data consumption in industry, government and research. The research framework is based on the social shaping of technology, infrastructure studies and the sociology of business knowledge.

The research focused on climate science due to the importance of the interface between data and modelling projections through software, and also on modelling data in manufacturing. In manufacturing it is a question of generic software vs localisation via specific vagueness, where metadata is under-emphasised and under-developed. Sharing data in government involved a more specific focus on curation of data and sharing data without affecting data ownership. The discourse on disintermediation tends to downplay the costs of co-ordination, particularly in respect of trust relations.

Data consumption is linked to issues in data visualisation that aggregates and simplifies data presentation, with careless consumption of data. Consumers have a preference for simplified visualisations, such as the two-by-two matrix to aid prioritisation. Such matrices become the shared language for users and the market, or are amended into different simplified visualisations such as waves or landscapes.

The specific vagueness of the software ontologies means that comparability of the data across platforms and contexts becomes impossible.

The study on ERP involved videoed observation; situational analysis was used in the study on government software to generate grounded data analysis; and the study on data visualisation involved direct interviews with providers and users of data.

Ontologies were discovered to be useless – a life-changing discovery!

Innovation as knowing, experience and action?

These are some very rough initial thoughts that I hope to develop over a couple of posts.

Building on an earlier post on learning, creativity & innovation summarising

that (a) innovation occurs through learning and (b) learning is a social/ collaborative process (and so innovation is also a collaborative process)

it is clear that innovation is about people involved in interactions, with an emphasis on action. It is only through doing things together that tacit knowledge can be exchanged. This is not about converting tacit knowledge into explicit knowledge, which is probably a bit of a myth. Rather, this is about social interaction for the co-creation of knowledge by doing together – so we can’t just be talking about imitation. Innovation is therefore about novelty: co-creating new knowledge within existing interactions or through new and novel connections. As Ekvall noted, there is value in openness, trust, playfulness and humour in work. So the highly intangible assets of an organisation, such as its culture, are critical here, pointing to how innovation links HR practices and knowledge management. Innovation practices are intensely practical, organisation-specific and organisation-wide – innovation cannot be concentrated in the R&D unit, new product development functions or a skunkworks.

web 2.0 [links]

A new report has been published by the CIPD on web 2.0 in organisations/enterprise 2.0 – the download is available here (at least to members anyway). The research is largely empirical and comes from a management innovation slant (as the researchers are from the Management Lab at London Business School). No surprises in the findings: more lip service than delivery, the innovative potential of the technology undermined by managerial preoccupation with command and control, and organisations still operating on one-way communication flows (broadcast not dialogue). It will be interesting to see how this research will ‘fit’ with the more qualitative (and larger scale) research on HR and web 2.0 by Graeme Martin, also commissioned by the CIPD.

interesting [links]

A number of people have been interested in the new report from JISC – see here – on the lack of digital literacies among the digital natives/gen Y. I’m less concerned about issues of plagiarism (tho’ that’ll change now I’m an actual academic, but we have software to spot that sort of thing) and more about the lack of competence to appraise online resources identified in the report. The ability to appraise such resources is, I think, critically important if the potential value creation of Enterprise 2.0 is to be realised.

This brings me to Jay Cross who recently posted on performance support as a learning ecology or learnscape:

Today, the greatest leverage in corporate learning comes from building on-going, largely self-sustaining learning processes. This process orientation focuses on the organization’s architecture for learning, a platform a level above its training programs and regulated events. The learnscape is a foundation for learning that is self-service, spontaneous, serendipitous, drip-fed, and mentored as well as the formal training that will always be with us.

Yet if the new workforce has not (yet) developed the skills to appraise and pass judgement on the value and usefulness of online resources, then the learnscape becomes far less valuable as a tool of enterprise growth and innovation. I hope I’m being unduly pessimistic, as to me enterprise 2.0 and the learnscape provide clear concepts that support how organisations should operate to be value-generating, competitive and human.