eLearning@ Edinburgh

I’m attending the eLearning@Ed 2015 conference and will be attempting to live blog throughout the day.

Melissa Highton, Director of Learning, Teaching and Web Services here at Edinburgh, opens the conference and its theme of Designing for 21st Century Learning. She asks what 21st century learning might be and how it might differ from 20th century learning. Many aspects of learning and education have stayed the same, but there are differences around scale, technology, teachers and teaching and, in particular, "it's not OK to not understand the internet anymore".

She highlights some trends in the sector from the New Media Consortium, including maker spaces, changing spaces for learning, BYOD, personalised learning and the wicked problems of recognition and reward for teaching.

Now moving on to a panel of Chairs in Digital Education giving their views on 21st century learning.

First up is Judy Hardy, School of Physics and Astronomy, with a personal view and concerns. Looking ahead to the student experience in 2020: in many ways it will be very similar to now – lectures, workshops, tutorials and self-study – but with much more extensive use of digital technologies. She uses an anecdote about a research methods course for honours students that includes a self-reflective assignment: many students used cloud-based tools and Facebook groups, and these sorts of tools and working methods will become mainstream. She also cited research comparing active engagement in classroom teaching against more traditional (didactic) learning designs, showing that active engagement has massive benefits for learning achievement.

But why is there lecturer resistance? She cited a survey showing lecturers want to teach and take pride in their teaching competences. So what are the challenges? Time – which is a proxy for many things – and pedagogical context, where innovations are abandoned early or are perceived as offering too many choices. So there are challenges of awareness, of 'how-to' knowledge, and of why innovations in learning are important – 'principles' knowledge – and understanding these three forms of knowledge is crucial to implementing improvements in teaching.

Next is Sian Bayne, Professor of Digital Education based in the School of Education. Sian talks about Dave Cormier's Rhizo MOOC, which included Tweets on one of Sian's papers that was a set reading. The paper was about striated and smooth space in online learning: striated space is formal, goal-orientated and ordered, while smooth space is nomadic, open and wandering-orientated, and these two metaphorical spaces do merge and their boundaries blur. We can map learning spaces onto them: striated spaces as VLEs/LMSs, and smooth spaces as hypertext, linkages, multimodal assessments, wikis and blogs. How do these metaphors work in 2015? We continue to have striated spaces in VLEs, progression, e-portfolios, personalisation, adaptive learning, learning analytics and gamification, but also increased smooth(er) spaces such as Twitter, YikYak, augmented realities, flipped classrooms, maker spaces and crowd-based learning. The bigger point is that this field is predominately future-orientated, with lots of trend forecasts generating an accelerating pressure to adapt practices to the 'next big thing'. But trends are contingent on the situated context (the University of Edinburgh), leading to questions of what sort of institution we want to be and what the purpose of higher education is.

Judy Robertson, Chair in Digital Learning, is talking about current work on using technology to support learner goal setting. A lot of her work involves user-centred design, mainly with school pupils, related to behavioural change in education and public health. Typically games set goals for users, but the interest here is in users setting their own, appropriate goals. She is currently developing a game to encourage behavioural change to increase activity levels. This can also be extended to realistic goal setting for students in their study skills. So the question is how to design technology to be helpful but not intrusive.

Critter Jam (FitQuest) is a mobile phone exercise game to encourage children to run around: the game includes being chased by a virtual wolf, or picking up virtual coins. Children can select different goals, such as topping the leaderboard, beating their PB or setting points targets (but how do they select an appropriate points goal?). Her research is on self-efficacy and on patterns of goal setting related to increased performance. It also links to resilience in the context of goal failure and adjusting goals accordingly – and this could be adapted to, for example, undergraduates.
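The adaptive goal-adjustment idea can be sketched in a few lines. This is my own illustrative toy, not FitQuest's actual logic: stretch the points target after success, and ease it down after repeated failure to protect self-efficacy.

```python
def adjust_goal(goal, achieved, miss_streak):
    """Toy adaptive goal setting: returns (new_goal, new_miss_streak).

    Raise the target after a success; after three consecutive misses,
    lower it so the learner keeps a realistic, motivating goal.
    """
    if achieved >= goal:
        return int(goal * 1.1), 0          # success: stretch the goal by 10%
    miss_streak += 1
    if miss_streak >= 3:
        return max(1, int(goal * 0.8)), 0  # repeated failure: ease the goal down 20%
    return goal, miss_streak               # otherwise keep the goal, count the miss

# e.g. a child with a 100-point target who scores 120 gets a 110-point target
print(adjust_goal(100, 120, 0))
```

The thresholds (10%, 20%, three misses) are arbitrary placeholders; the point is only that the goal responds to performance rather than being fixed by the game.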

David Reay from GeoSciences, talking on distance education and the development of the MSc in Carbon Management, involving the Schools of Business, GeoSciences and Economics. There was a clear demand from students for applied experience, and online learning was developed in response. Initially a role-play simulation was developed for face-to-face learning, then adapted for online delivery as part of the MSc in Global Challenges. Now a fully online MSc in Carbon Management launches in September. He is also developing an online course in sustainability for campus-based students, linked to graduate attributes around understanding sustainability: each student will look at sustainability in their own subject area to understand what it means, and have an excellent online learning experience. His research is on climate change, including the environmental impacts of online teaching and conferencing, measuring the total carbon emissions of the online programmes. The intention is to offset the carbon emissions generated by the programme – to be the greenest masters ever!

Dragan Gasevic, Professor of Learning Analytics at the Schools of Education and of Informatics, on why learning analytics is important: especially in providing personalised feedback loops for students that acknowledge their diverse needs. We use VLEs/LMSs but also rely on many other digital technologies for learning, including the web, social learning, and reflective learning through annotation technologies and blogs. In using digital technologies we leave a digital footprint – indeed, we have been collecting some of this data since the start of universities. We want to leverage this data to assist teaching, learning, policy-making etc., and this is the point of learning analytics. Learning analytics is about learning, and this must not be forgotten – it is not data crunching for its own sake but purposive. Learners are not black boxes but individuals with many different and impermanent traits, knowledge and understanding. The black box needs to be opened up to deliver the benefits of learning analytics. He points to CLAS – the collaborative lecture annotation system – but the key is to encourage learners to use beneficial technologies. So we have a duty to inform students of the benefits of a technology and to scaffold support for students using it. He found that students were more engaged with technologies in graded courses, and came to internalise the use of the tool in both graded and ungraded courses. So if we teach our students to use a tool, they will continue to use that tool even when its use is not required. Learning analytics supports and validates pedagogy.

"Counts don't count much if decontextualised!" We need to account for pedagogical context in learning analytics. Also, visualisations can be harmful, especially when shown to learners/students, so we need to develop analytics literacy for students. We also need to scale up qualitative analysis to improve understanding of learners, and to develop institutional policies to support the use of analytics. But the use of learning analytics is contingent on each institutional context – one size does not fit all!
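The "counts don't count" point can be made concrete with a toy sketch (my own illustration, not any real analytics system): the same raw activity count means very different things once normalised against the pedagogical context, here the level of activity the course design actually expects.

```python
def contextualised_engagement(clicks, expected_clicks):
    """Toy learning-analytics measure: a raw count only becomes
    meaningful relative to what the course design expects."""
    return clicks / expected_clicks

# 40 clicks is high engagement in a reading-light seminar course...
seminar = contextualised_engagement(40, 20)   # ratio 2.0
# ...but low engagement in a click-heavy flipped-classroom course.
flipped = contextualised_engagement(40, 200)  # ratio 0.2
```

The "expected clicks" baseline is a stand-in for whatever the pedagogical design implies; the design choice illustrated is simply that no count is reported without a context to divide by.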

Jonathan Silvertown, Biological Sciences, is talking about the project 'Virtual Edinburgh', which will turn the city into a pervasive learning environment for formal and informal education. The future is already here – WiFi on buses, but also apps such as Walking Through Time, LitLong (Palimpsest), Mesh, iSpot etc. – but Virtual Edinburgh will also allow interaction between users. Look also at the 'nearby' function on Wikipedia. These apps and functions will be linked together through Virtual Edinburgh, drawing on the teaching and learning strategy priorities of giving learners agency and providing technology to do that. Modes of interaction will involve existing and new apps, peer interaction, game play, new data layers, mashups etc. that can be used in courses or as part of self-directed (informal) learning. The ultimate objective is to make Edinburgh the City of Learning.

Questions

Question: One of the themes is on student digital literacy and what baseline of literacy should we expect students and staff to have?

Judy R: That's a really interesting question, as we cannot assume that students will know how to use technology for learning.

Judy Hardy: we need to think about how institutional and personal technologies are used, with students perhaps preferring their personal technologies.

Dragan: the focus should be on study and learning skills – these will not change, but abilities in them may decline due to the affordances of new technologies.

Dave Reay: a confession – at the start of the online course he assumed students would know about and be able to use particular technologies. Preparation with students is key.

Sian: research has busted the idea of the digital native. The evidence is that what students come to the university with is less important than what we expect them to do. As many of the talks have suggested, context is key.

Question: on engaged learning[??]

Judy H: the flipped classroom is important in using technology to engage larger cohorts of students, as the large lecture will not disappear.

Question: I teach honours and postgraduate students and try to get them to use newer technologies; if students are not introduced to these technologies earlier, it may be too late for them to learn to use them for learning.

Judy H: do we need to be more explicit in encouraging students to develop relevant technology skills?

Dave Reay: this will improve in patches and should be a question for programme convenors to develop online learning experiences in degree programmes.

Dragan: we have academic autonomy, so top-down solutions will not work. We need to consider what technologies academics are aware of and can use, and what incentives are provided to encourage the use of technologies. He suggests greater emphasis on, and recognition of, teaching.

Question: are the learning technologies we are developing taking account of accessibility and the ethical responsibilities of the university?

Dave Reay: the technologies and online courses increase the accessibility of the programmes to new and different students, avoiding some of the challenges of cost, visas and personal circumstances.

Sian: need to differentiate between learning and education – wanting to learn is different from seeking qualifications via formal education.

Dragan: accreditation is an important factor. Also, students don't come to Edinburgh just for the content but also for the experience and networks. Online learning also demands more highly developed abilities in self-regulated learning. We also tend to think in terms of credit hours rather than outcomes, and this can be seen in shifts towards competence-based education, including graduate attributes.

Question: what practical measures could be taken to keep academic staff up to date with what is happening with learning technologies at school level?

Judy R: the Curriculum for Excellence does include technology in primary school, such as using Microsoft Office, but also extreme paranoia about anything social online and about allowing pupils outside the walled garden of, e.g., GLOW.

Judy H: not all our students come through the Scottish education system, and we need to encourage self-regulated learning for students coming from a vast range of education systems.

Jonathan S: that would be a good topic for next year's conference.

 

We're back from a break with Dash Sekhar, VPAA, and Tanya Lubicz-Nawrocka from Edinburgh University Students' Association on "Co-Creation: Student Ownership of Curriculum". They start with the many forms of student engagement, such as Kuh's focus on time and effort aligned to institutionally desired outcomes, and Bovill's emphasis on respect, reciprocity and shared responsibility between students and academics.

Co-creation operates on a continuum: from student feedback/evaluation, to students as experts in their own learning experiences expressed through student representation, to co-creation of the curriculum. So co-creation is a mutuality between students and academics, and thus shifts power relations between staff and students.

They put the ideas of co-creation into action through student-led content, where students create their own projects to meet learning outcomes and assessment criteria. Technology allows for more flexible and remote learning.

Student partnerships in assessment: students select and negotiate the assessment components and weightings to create a sense of joint ownership of the assessments. This involved a democratic process for selecting the final assessment.

Social bookmarking: in a statistics course, students had to tag sites and posts related to the course content as part of the course, and these posts were used in a running 'live feed'. While fairly surface-level, this involved a shift in how students relate to course content.

We’re now moving to small group discussion so I’ll stop here and be back later. 

Group work over, and we're on to Prof. Ian Pirie, Assistant Principal Learning Developments, on the use of portfolios and e-portfolios in art & design; Simon Riley (CMVM) will talk about portfolios in medicine. Portfolios are used to demonstrate research, process, methods, outcomes etc., with the student curating a portfolio for submission for assessment. Portfolios are central to the method of art & design education in the context of sustained practice, and are used across art, design, architecture, medicine, engineering, healthcare etc., linked to the demonstration of competence.

In the case of art, design & architecture, the portfolio is used from recruitment through almost all assessments. Portfolios include all forms of media and are crucial for entry to the next stage of education and to professional careers.

Simon Riley, on portfolios in medicine. Medical education is governed by the GMC as a competency-based curriculum, with an interest in allowing student choice. To enable the student-choice element of the curriculum, portfolios have been used since the 1990s.

The university curriculum is closely mapped to the GMC requirements, and the different themes of the curriculum are pulled together through the portfolio. Portfolios include case reports, essays, project reports, reflective analyses of professional skills and of experiences, assessment (by viva) and project organisation. The reflective analysis components still have room for further development.

There is also a professional development portfolio including capturing the graduate attributes using Pebble+ in parallel to the programme portfolios.

Gives the example of a Group Project that uses an open WordPress site. This involves the collection and synthesis of information and knowledge.

The portfolios are being used for the demonstration of competence and reflection. Portfolios also train students for progression to postgraduate study and professional development. There is a huge amount of commonality between how medicine and art & design use portfolios.

Back to Prof. Ian Pirie on the shared pedagogy, based on Kolb's model of experiential learning. In the remaining time, the range of e-portfolios being used at Edinburgh is shown. A key issue is making e-portfolios transferable, so students can use them outside and after their time at the University.

 

Melissa Highton is in the last slot before lunch to talk about Open Educational Resources: new media for learning, and recent developments on OER at Edinburgh.

Openness is seen as a bold and positive move for the University. Initially, the University set up a task group on the development of an OER strategy; OER underpins a lot of the themes of this conference. The task group involved a range of academic and support-services stakeholders. She cites the Cape Town Declaration of 2007 as a fit with stated intentions around sharing and developing knowledge. This sharing of knowledge and learning resources is enabled by technology, but resources need adapting to the local context, and we're not always sure whether this is possible/legal. There are also strong opinions that publicly funded resources should be open.

A problem with the word 'open' is that it means different things: available, available online, accessible. There is a definition of open: "open data and content can be freely used, modified and shared by anyone for any purpose". Rigour in the definition is needed, in part to manage the reputational risks of stating that the university is using open resources, and so that staff understand the licensing, sharing and publishing of material. Licensing tends to be under Creative Commons licences, which fits nicely with the notion of teaching as a creative act – and this is a growing phenomenon, with 882 million items under CC licences in 2014, up from 50 million in 2006.

Fourteen countries have made national commitments to open education, including Scotland. CC-licensed material is available from all over the world – which would help in internationalising and diversifying the curriculum.

Edinburgh has launched open.ed for open content resources. CC licences also allow us to renew and amend resources, so as technologies change, resources can be updated and are therefore sustainable.

…and now it's time for lunch… and I'll have to finish here as I've run out of power and the plug points don't work…


Working & teaching in Second Life

I am currently enjoying my first extended experience of teaching in (and on) Second Life. Here are a few images of the initial orientation sessions and later tutorials in action at two of our teaching spaces:

[Image: orientation at the beach]


Context, personalisation and facilitation – new paper to be published

[Update: the paper was published in January and can be found here.] A short paper by me is to be included in a special edition of TechTrends, to be published in the New Year. The abstract is:

This article explores professional learning through online discussion events as sites of communities of learning. The rise of distributed work places and networked labour coincides with a privileging of individualised professional learning. Alongside this focus on the individual has been a growth in informal online learning communities and networks for professional learning and professional identity development. An example of these learning communities can be seen in the synchronous discussion events held on Twitter. This article examines a sample of these events where the interplay of personal learning and the collaborative components of professional learning and practice are seen, and discusses how facilitation is performed through a distributed assemblage of technologies and the collective of event participants. These Twitter-based events demonstrate competing forces of newer technologies and related practices of social and collaborative learning against a rhetoric of learner autonomy and control found in the advocacy of the personalisation of learning.

I’m looking forward to it coming out – along with other excellent papers from colleagues here.


Theorising Technology in Digital Education

These are my notes taken during the presentations and then tidied up a bit in terms of spelling and sense – so they may well be limited, partial and mistaken!

Welcome from Sian Bayne, with the drama of the day – "fire! Toilets!" – and confirmation that the event is being livestreamed; the video is available here.
Lesley Gourlay, as chair for the day, also welcomed participants from across the UK and Copenhagen. The SRHE (Society for Research into Higher Education) seeks to provide a forum for a more theorised and critical perspective on technology in higher education. Prof Richard Edwards at the School of Education gained funding for international speakers for today's event. Unfortunately Richard is ill and can't be here.

The theme of the event is developing the theoretical, ethical, political and social analysis of digital technologies, shifting away from an instrumentalist perspective. The event Twitter hashtag is #srhe.

The first presentation is by Alex Juhasz on the distributed online FemTechNet. FemTechNet as a network does not often speak to the field of education, so this is a welcome opportunity (she has also blogged on the event here).

FemTechNet is an active network of scholars, technologists and artists interested in technology and feminism. The network is focused on both the history and future of women in technology sectors and practices. FemTechNet is structured through committees and has a deep process-focused approach to its work, which is important in terms of feminist practices. Projects involve the production of a white paper, teaching and teaching practices, workshops, open office hours, co-teaching etc., modelling the interaction of theory and practice. But it has been difficult to engage students in collaborative projects, while staff/professors are much more engaged. Town halls are collaborative discussion events, with an upcoming event on Gamergate to include a teach-in. FemTechNet has also produced a 'rocking' manifesto as "feminist academic hacktivism" and "cyberfeminist praxis".
FemTechNet's values are made manifest in Distributed Open Collaborative Courses (DOCCs), themed on Dialogues on Feminism and Technology (2013) and Collaborations in Feminism and Technology (2014). DOCCs push against the xMOOC model to promote a participative approach to course design and distributed approaches to collaboration. The DOCC was labelled the feminist anti-MOOC, based on deep feminist principles including wikistorming, and attracted much press and other interest, some positive and some 'silly' (Fox News). FemTechNet has lots of notes on using tools and teaching approaches that can be applied across lots of different critical topics beyond feminism alone.
DOCCs are designed to be distributed, with a flatter hierarchy and less of a focus on massiveness, using technology in an open way to co-create knowledge beyond transmission. More details on the DOCC as a learning commons vs a course can be found here.
The FemTechNet commons is now housed, redesigned, at the University of Michigan – although this may be a way for universities to avoid Title IX violations. But as a result, the newer commons has become less open and collaborative as an online space.
Much of FemTechNet's work has involved overcoming technological hurdles and has been based on the unpaid work of members. FemTechNet engages with critiques of labour practices and contexts in higher education.
The DOCC networks involve a wide range of types of universities – from the Ivy League to community colleges and community organisations – working collaboratively.
Student numbers are fairly small, at approximately 200 students, but with very high completion rates and very positive feedback and evaluations. Between 2013 and 2014 there was little growth in scale, partly due to limitations of infrastructure. Now, with the support of the University of Michigan, there is an increased aspiration to develop international collaborative work.
DOCCs involve networking courses from many different fields of study, spanning on-campus to fully online courses. The basic components of courses are keynote dialogue videos, smaller keyword dialogues and five shared learning activities. See also the situated knowledge map [link]. There is a big emphasis on shared resources, cross-disciplinarity and inter-institutional working and learning.
So while DOCCs emerged from a feminist network, the tools, models and approaches can be used in many subject areas.

After lunch

Ben Williamson is presenting on Calculating Academics: theorising the algorithmic organisation of the digital university. The opening slide is of a conceptualisation of a digital university that can react to the data and information it receives. Ben will be presenting on a shift in the understanding of the university as mediated by the digital, focusing on the role of algorithms.
One of the major terms in use is the smart university: based on big data to enhance teaching, engagement, research and enterprise, to optimise and utilise the data universities generate. This turn is situated in the wider concept of 'smart cities'.
Smart cities are ‘fabricated spaces’ that are imaginary and unrealised and perhaps unrealisable. Fabricated spaces serve as models to aspire to realise.
Smart universities are fabricated through technical devices, software and code; social actors, including software producers and government; and discourses of texts and materials.
An algorithm is seen in computer science as a set of processes to produce a desired output. But algorithms are black-boxed: hidden in IP and impenetrable code, and also hidden in wider heterogeneous systems involving languages, regulation and law, standards etc.
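That textbook definition – a well-defined sequence of steps from input to desired output – is trivially visible in code. A deliberately banal example of my own:

```python
def mean(grades):
    """An algorithm in the plain computer-science sense: a finite,
    well-defined sequence of steps turning an input into a desired output."""
    total = 0
    for g in grades:                # step 1: accumulate the values
        total += g
    return total / len(grades)     # step 2: normalise by the count

print(mean([60, 70, 80]))
```

Even here the sociological point holds: reporting a mean rather than, say, a median is itself a choice that embeds assumptions about what the output should say about learners.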
Algorithms also emerge and change over time, are to an extent out of control, and are complex and emergent.
Socio-algorithmic relationality: algorithms co-constitute social practice (Bucher 2012); generate patterns, order and coordination (Mackenzie 2006); and are social products of specific political, social and cultural contexts that go on to be productive themselves.
They involve the translation of human action through mathematical logics (Neyland 2014). Gillespie (2014) argues for a sociological analysis of algorithms as social and political, as well as technical, accomplishments.
Algorithms offer (Gillespie 2014): technical solutions; synecdoche – an abbreviation for a much wider socio-technical system; a stand-in for something else, for example corporate ownership; and a commitment to procedure, as they privilege quantification and proceduralisation.
Big data is a problem area within this idea of the smart university. Is there a different epistemology for big data? Big data cannot exist without algorithms, and has generated a number of discourses. Wired magazine has suggested that big data is leading to the end of theory, as there is no need to create a hypothesis when big data will locate patterns and results – a challenge to traditional academic practice. There is also the rise of commercial social science, such as the Facebook social science team, often linked to nudging behaviours and "engineering the public" (Tufekci 2014). This is replicated in policy development, such as the Centre for Analysis of Social Media at Demos using new big data sets. We're also seeing new academic initiatives, such as social physics at MIT building a predictive model of human behaviour; see also the MIT Laboratory for Social Machines in partnership with Twitter.
This raises the question of what expertise is being harnessed for smarter universities. He points to the rise of alternative centres of expertise that can conduct big data analysis, labelled 'algorithmists' by Mayer-Schönberger and Cukier. Such skills and interdisciplinarity do not fit well in the university. Is this the rise of non-sociologist sociologists doing better social research?
Mayer-Schönberger and Cukier's Learning with Big Data – predictive learning analytics, new learning platforms etc. – is reflected in the discourses on the smarter university. Big data generates the university in immediate and real time: it doesn't have to wait for assessment returns. See, for example, IBM's education for a smarter planet, focused on smarter and prescriptive analytics based on big data.
Knewton talks of inferred student data, suggesting the algorithm is objective and consistent. But as Seaver (2014) points out, these algorithms are created and changed by 'human hands'.
So we're seeing a big data epistemology that uses statistics to explain and predict human behaviour (Kitchin 2014): algorithms can find patterns where science cannot, and you don't need subject knowledge to understand the data. But he goes on to note that this rests on fallacies: big data is partial, based on samples, on what analysis is selected, and on what data is or can be captured. Jurgenson (2014) also argues for understanding the wider socio-economic networks that create the algorithms – the capture of data points is governed by political choices.
Assumptions of big data are also influencing academic research practices. Algorithms are increasingly entwined in knowledge production when working with data – through tools such as NVivo, SPSS and Google Scholar (Beer 2012): the algorithmic creation of social knowledge. We are also seeing the emergence of digital social research around big data and social media, e.g. the social software studies initiative – social science is increasingly dependent on digital infrastructure not of our making.
Noortje Marres: rethinking social research as a distributed and shared accomplishment involving the human and non-human.
This in turn influences academic self-assessment and identity, through snowball metrics, citation scores, Researchfish etc., translating academic work into metrics. See Eysenbach's (2011) study linking Tweets and rates of citation. So academics are subject to increasing quantified control, mediated through software and algorithms: the emergence of the quantified academic self. Yet academics are socialised in and by these social media networks, which exacerbates this e-surveillance (Lupton 2014). Meanwhile, shared research develops its own lively social life outside of the originator's control.
Hall (2013) points to a new epistemic environment in which academics are becoming more social (media) entrepreneurial. Lyotard (1979) points to the importance and constraints of the computerisation of research.
He finishes with questions:
– how do cognitive-based classrooms learn?
– what data is collected to teach?
– should academics learn to code?

There was a lot of discussion on the last question. It was also pointed out that no one asks whether coders should learn to be sociologists.
It was also pointed out that people demonstrate the importance of embodied experiences through protests and demonstrations, reflecting what is lost in the turn to data.

After a short break, we now have Norm Friesen on "Education Technology, or Education as Always-Already Technological". He talks about educational technology as not new, but as going through a series of entwinements over time. Norm will look at the older technologies of the textbook and the lecture, looking back at older recognisable forms.
Looking back, we can argue that educational technologies now are not presenting particularly novel problems for higher education. Rather, since higher education has always been constituted with and through technologies, we can see how its practices can adapt to newer technologies now.
Technologies in education have always been about inscription and symbols, as well as performance. We can understand the university as a discourse network – see Kittler's discourse networks in his analysis of publishing in the 19th century. Institutions like universities are closely linked to technology, storing and using technologies and modifying them for their practices.
Tablets go back to ancient times – like the horn book and other forms tightly coupled with institutions of learning and education. Clay tablets dating back to 2500–2000 BCE show student work and teacher corrections as symbolic inscriptions of teaching and learning practices, and such tablets work at the scale of individual student work or of larger epic literatures. We can see continued institutional symbolic practices through to the iPad. Here technologies may include epistemic technologies, such as knowledge of multiplication tables or the procedures of a lecture – technologies as a means to an end – so technologies are 'cultural techniques'.

For the rest of the presentation the focus will be on the textbook and the lecture as technologies that are particularly under attack in the revisioning of the university. Ideas of the flipped classroom still privilege the lecture through video capture. Similarly, the textbook has yet to be overtaken by the e-textbook. Both provide continuities from over 800 years of practice and performance.
The lecture goes back to the earliest universities, originally as the recitation of a text – for transmission rather than generation of knowledge, with a focus on the retention of knowledge. Developing one’s own ideas in a lecture was unknown, and student work involved extensive note taking from oral teaching (see Blair 2008). The lecture is about textual reproduction. Even following the printing press, this lecture practice continued, although slowly the lecturer’s own commentary on the text was introduced, manifested as interlines between the lines written from the dictated text. Educational practice tended not to change as rapidly as the technologies of printing, such that education was about 100 years behind.
But around 1800 we see the first lectures given only from the lecturer’s own notes, so the lecture was recast around the individual as the creator of knowledge. The individual lecturer and student, not the official text, became the authoritative sources of knowledge. The notion of performance also becomes increasingly important in the procedure of the lecture.
In textbooks we see pedagogical practice embedded in the text as end of chapter questions for the student to reflect and respond to (the Pestalozzian method, 1863). This approach can be seen in Vygotsky, Mead and self-regulated learning.
Specific technological configurations supported the increased emphasis on performance, such as podcasting, PowerPoint, projectors, etc. (see TED talks).
In the textbook, similar innovations are happening in terms of layout, multimedia and personalised questioning (using algorithms). The textbook becomes an interactional experience but continues from much older forms of the textbook. What is central is the familiar forms – the underlying structures have persisted.

But it is also the case that lecturers no longer espouse their own theories; they do not create new knowledge in the lecture.


Making & Breaking Rules in IT Rich Environments

These are my notes taken during the presentation and then tidied up a bit in terms of spelling and sense – so they may well be limited, partial and mistaken!

Prof Kalle Lyytinen, Case Western Reserve University.

The welcome came from Robin Williams, noting that Kalle has a wide range of appointments and research interests and often acts as a bridge builder across different subject disciplines and between American and European research communities. Kalle has been particularly supportive around research in IT infrastructures and in supporting the development of research communities on IT infrastructure.

Kalle starts the presentation with a discussion of the background of this paper that has been developing over the last five years. His research is positioned within science and technology studies (STS) but with a more behaviourist focus. This paper investigates issues of regulation which is fundamental to social interactions through establishing what is and is not acceptable behaviour within a specific context.

The example of the Société Générale fraud by Jérôme Kerviel, who fooled the control systems to undertake fraudulent trading, resulting in losses for the bank of approximately €5bn. This fraud was contrasted with old-fashioned approaches to bank robbery and the regulatory regimes aimed at preventing such robberies, to highlight that digital banking requires new and different regulatory regimes.

IT systems embed rules that have regulatory functions on access to and the use of resources. Yet a key concern remains with how social actors comply with and work around these rules. So this research is concerned with how IT can be seen as materially based organisational regulation in interaction with the social.

What is a rule? Rules tend to be defined as purely social statements of the expectations on behaviours by participants in a system, and it is assumed that such rules are generally reciprocal. The expectations should create stabilities of behaviour, yet they are not mechanistic, so variances occur through misunderstanding, reinterpretation and resistance. For organisations, what is key is the materiality of rules through systems, processes, expressions in space design and so forth, which also generates stability over space and time. Regulation combines social and material components, intertwined in a practice, that decrease variance in behaviours and also facilitate the coordination of collective action.

Regulation is a meeting point of tensions between structure and agency raising questions on, for example, centralisation vs decentralisation of decision-making.

An IT system is a dynamic and expansive resource through which regulatory power is exercised by the materialisation of rules. Rules are stored, diffused and enforced through IT. IT encodes and embeds rules (Latour 1996, 2005), while rules become more complex through IT systems that allow complex combinations of rules. IT can track, record and identify events at large scale, high speed and low cost – which is where big data can help identify and enforce new rules. Through IT, regulation becomes less visible as it is embedded in, for example, user interfaces.

The example of high-frequency trading and how IT rules are established that limit what types of trades can be operationalised – see Lewis’s Flash Boys.

Regulation has three dimensions: 1. the rules that are materialised as 2. an IT artefact that is interdependent on 3. practices. Rules are coupled over time with practices (such that the rule may be forgotten as it is embedded in the IT artefact).

IT regulation research in the 1970s to 90s viewed regulation as oppressive and deterministic, and from the 1990s research was more concerned with deviation in practice. A lot of research on regulation positioned IT as a contextual variable, while a much smaller number of studies looked specifically at the IT in terms of materialisation, the enactment of rules in practices and the temporal aspects (Leonardi 2011). So research on IT and regulation is limited.

The research focuses on the sources and co-existence of multiple IT-based regulations, which generate heterogeneous and conflicting regulations and so have multiple consequences.

Our focus is on practices of maintaining and transforming rules that mediate collective activity. Regulations have three types of origins: (i) autonomous, where people agree on behaviours; (ii) control-orientated, based on explicit rules and laws; or (iii) joint. The research is interested in practices in IT-rich environments as rules become more invisible when they are inscribed into technology and/or material. The same rule can be embedded in different ways, e.g. speeding rules embedded in speed bumps and/or in a vocal warning from the speedometer.
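The speedometer example can be sketched in code. This is a hypothetical Python illustration (the talk included no code; the names and the limit are invented) of how a rule, once inscribed in software, does its regulatory work invisibly until it is breached:

```python
# The speed limit rule, materialised as a constant in software rather
# than as a physical speed bump.
SPEED_LIMIT_KMH = 30

def speedometer_warning(speed_kmh):
    """Return a vocal-style warning when the rule is breached, else None."""
    if speed_kmh > SPEED_LIMIT_KMH:
        return f"Slow down: {speed_kmh:.0f} km/h exceeds the {SPEED_LIMIT_KMH} km/h limit"
    # Within the rule, the regulation stays invisible to the driver
    return None

print(speedometer_warning(45))  # the rule is enforced
print(speedometer_warning(25))  # the rule stays invisible
```

The same rule could equally be materialised as a speed bump or a hardware governor; the point is that the rule itself disappears into the artefact.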

The study was a 7-year longitudinal study of regulatory episodes in a virtual learning environment (VLE): how teaching and learning behaviours are regulated through the VLE. Data was gathered from email logs, interviews and document analysis. The analysis focused on critical incidents, simple statistics and lexical analysis of emails.
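As a purely hypothetical sketch of the simplest form such lexical analysis might take (the emails below are invented, not from the study), a word-frequency count over an email corpus:

```python
from collections import Counter
import re

# Invented stand-ins for the study's email corpus
emails = [
    "Please use the discussion tool for all course activities",
    "The discussion tool is now restricted to tutor roles",
]

# Tokenise: lower-case and keep alphabetic words only
tokens = []
for email in emails:
    tokens.extend(re.findall(r"[a-z]+", email.lower()))

# The most frequent terms give a first hint of what a regulatory episode was about
print(Counter(tokens).most_common(3))
```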

The research questions were: 1. what is the focus of the regulatory episodes and 2. what was the temporal coupling between regulation and behaviour. The VLE provides a rich environment with alternative forms of regulation, dynamic in terms of wider changes in higher education, rules embedded in the application and how it is used.

Five types of regulatory episodes, all of which changed over time:

1. Functional – restrictions on how users use the VLE based on the functionality of the VLE

2. Tool orientated – specific tools are imposed for specific activities

3. Role orientated – which roles can use which aspects of the VLE

4. Procedure orientated – where learning processes such as course activities are practiced in new ways

5. Opportunity orientated.

Material regulation is dominant in functional and tool orientated rules while the social was dominant in role and procedure orientated rules.

The complexity of the multiplicity of rules and sources of rules led to confusion and difficulties in enforcing rules but, with low levels of constraint, were also sources of innovation in practices. Also, increasing the formal limits of the IT systems generated conflict over the rules.

As the operationalisation of the VLE continued over time so the complexity and volume of rules increased.

Over time the central administration of the university asserted increased control over the VLE for purposes of efficiency and uniformity of provision, but also to legitimise its existence. This increased control also removed a lot of local innovations. The materialisation of the rules in the VLE enabled greater centralised control, but IT choices then limit what future flexibility may be possible.


Facebook network

A sociogram of my Facebook network

I am currently trying to catch up on the Coursera MOOC on social network analysis. My main aim in taking the course is to force myself to learn about using Gephi for network analysis. The course so far has been clear and well presented, but it’s early days. Also, using Gephi on the Mavericks version of OS X has been a pain, largely due to Java, as Gephi won’t run on the default install of Java. The solution can be found on the Gephi forums here, although I’m still having some problems with Java.

I don’t use Facebook much and was a bit surprised at the density of the network as a whole but having that number of sub-clusters was less surprising considering the stop-start nature of how the network developed. I’ll have to find out who the single unconnected nodes are once the Java issues have been resolved.
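As an aside, the two measures at stake here – overall density and the number of sub-clusters – can be computed without Gephi. A stdlib-only Python sketch on an invented stand-in network (not my actual Facebook data):

```python
from collections import defaultdict

# An invented friendship network: one triangle, one pair, one isolated node
edges = [("ann", "bob"), ("bob", "cat"), ("ann", "cat"), ("dan", "eve")]
nodes = {"ann", "bob", "cat", "dan", "eve", "fay"}  # "fay" is unconnected

# Density: actual edges as a fraction of all possible edges
n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))
print(f"density: {density:.2f}")  # density: 0.27

# Sub-clusters appear as connected components, found by depth-first search
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

seen, components = set(), []
for node in nodes:
    if node not in seen:
        stack, comp = [node], set()
        while stack:
            cur = stack.pop()
            if cur not in comp:
                comp.add(cur)
                stack.extend(adj[cur] - comp)
        seen |= comp
        components.append(comp)

print(f"sub-clusters: {len(components)}")  # sub-clusters: 3
print(f"isolates: {[sorted(c)[0] for c in components if len(c) == 1]}")  # ['fay']
```

Gephi can export edge lists (e.g. CSV or GraphML), so the same calculation could be run on the real sociogram once the Java issues are sorted.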


weeknotes [20102014]

Over the last few weeks, I’ve been

further working through my research involving discourse analysis along with network and other sociomaterial methods for my PhD. I think I’m developing a stronger understanding of the method “in action” and of Technology Enhanced Learning.

I’m also continuing to enjoy the teaching on two courses: Digital Environments for Learning; and Course Design for Digital Environments.

I’m also continuing to contribute to the development of two initiatives which I’ll hopefully write about sometime soon.


What is wrong with ‘Technology Enhanced Learning’

Last Friday I attended a Digital Cultures & Education research group presentation by Sian Bayne on her recent article What’s the matter with ‘Technology Enhanced Learning’?

These are my notes taken during the presentation and then tidied up later – so they may well be limited, partial and mistaken!


16th century French cypher machine in the shape of a book with arms of Henri II. Image from Uploadalt

While Technology Enhanced Learning (TEL) is a widely used term in the UK and Europe, the presentation positions TEL as an essentially conservative term that discursively limits what we do as practitioners and researchers in the field of digital education and learning. Sian’s critique draws on three theoretical perspectives:

* Science & Technology Studies (STS) for a critique of ‘Technology’
* Critical posthumanism for a critique of ‘Enhancement’
* Gert Biesta’s language of learning for ‘Learning’

For Technology, we tend not to define it but rather black-box it as unproblematically in service to teaching practices. This black-boxing of technology as supporting learning and teaching creates a barrier between the technology and the social practices of teaching. As Hamilton & Friesen discuss, there are two main perspectives on technology: an essentialist perspective of inalienable qualities of the technologies, or an instrumentalist one that treats technology as a neutral set of tools. In both cases technology is understood as being independent of the social context in which it is used. Hamilton & Friesen argue we need to take a more critical stance, especially in terms of technology as the operationalisation of values, and to engage in larger issues such as social justice, the speed of change and globalisation, the nature of learning or what it is to be human.

By using the term Enhanced, TEL adopts a conservative discourse: it assumes there is no need to radically rethink teaching and learning practices, just a need to enhance or tinker with existing practice. Enhancement thereby aligns with Transhumanism – a humanist philosophy of rationality and human perfectibility in which technological advances remove the limitations of being human (Bostrom 2005).
Critical post-humanism (Simon 2003) is a philosophical critique of the humanism of the Enlightenment, its assumptions about human nature and its emphasis on human rationality, arguing that these assumptions are complicit in dominatory practices of oppression and control. The human being is just one component in a complex ecology of practice that also includes machines and other non-human components in symmetry. So post-humanism is more about humility: an appreciation that our involvement as humans in our context is complex, inter-related and interactional. Yet TEL buys into a dominant Transhumanism emphasising the cognitive enhancement of the mind, and so could include the use of drugs as a ‘technology’ to enhance learning (see the Technology Enhanced Learning System Upgrade report).
Transhumanism positions technology as an object acted on by a human subject, ignoring how humans shape and are shaped by technology, and does not ask: is ‘enhancement’ good, who benefits from enhancement, and is enhancement context specific? It is argued that TEL could learn from the post-humanist critique of Transhumanism.

The ‘problem’ of Learning draws on Gert Biesta’s writing on the new language of learning and, more specifically, the ‘learnification’ of discourses of education. This involves talking about “learning” rather than “teaching” or “education”. Learning as a term is used as a proxy for education that takes discussions away from considerations of structures of power in education itself. So learnification discursively instrumentalises education – education is provided/delivered to learners based on predefined needs rather than needs emerging and evolving over time. So learners are positioned as customers or clients of education ‘providers’, and TEL gets bound up with this neo-liberal discourse/perspective.

So the label of TEL tacitly subordinates social practice to technology while also ontologically separating the human from the non-human. The TEL discourse is aligned with broader enhancement discourse that enrols transhumanism and instrumentalisation so entrenching a particular view of the relationships between education, learning and technology.

Rather, education technologies involve complex assemblages of human and non-human components, and as practitioners and researchers we need to embrace that complexity. Posthumanism, as a stance, is a way of doing this: understanding learning as an emergent property of complex and fluid networks of human and non-human elements coming together. In posthumanism, the human is not an essence but rather a moment.


Distributed governance of technological innovation through the case of WiBro in S. Korea.

I attended the Social Informatics Cluster meeting to hear Jee Hyun Suh present on: Co-evolution of an emerging mobile technology and mobile services: distributed governance of technological innovation through the case of WiBro in S. Korea. These are rough notes taken during the presentation.

She presented the story of WiBro and the implications for the governance of large-scale technological innovations for technology companies and government. WiBro was initiated in 2001 as a national R&D programme for high-speed portable internet; it was harmonised with national and international standards (WiMAX) and went to a commercial launch in 2006. It is widely seen as a case of market failure despite a successful technological innovation.

The research objectives were initially to examine the socio-technical factors in the development of the technology and the gap between the visions and outcomes of the technology commercialisation and explore the governance of large scale and complex innovations. The technology’s development was interpreted through social learning processes with a particular focus on building alignments between the technology, service evolution, standardisation and social learning within a wider development arena of R&D.

Over the course of the research period, 2001 to date, the focus of interest shifted from the design and development of the technologies, to a commercial focus, and then on to the evolution of the service. The WiBro development was linked to broader policy imperatives of positioning S. Korea as an innovation leader.

The technology itself was predicated on a problematisation of the inefficient use of the 2.3 GHz band, and then the enrolment of stakeholders to co-shape a generic vision of a portable internet service using that bandwidth. This co-evolved with the drive towards a high-performance portable internet and processes of standardisation, with standard setting closely linked to bandwidth/spectrum allocation. It became conceived as a seamlessly interlinked innovation process, but different interests and objectives across stakeholders remained unresolved, especially between a focus on technology development versus commercial exploitation through existing technologies. There were also shifting alignments around the adoption of differing international standards. The technology was successfully developed, and a pre-commercial product was showcased at APEC 2005.
Commercialisation occurred around processes of spectrum licensing. Again, there were different visions for WiBro, e.g. as an extension of fixed-line services, as a differentiated service, and as a complementary service to existing mobile networks. These different visions were rolled into different commercial aims, e.g. early market advantage vs an emphasis on interoperability, adoption or blocking of VoIP, as well as the emergence of 3G services. The later development of 4G mobile resulted in shifts to the vision of WiBro and how it should evolve.
Also, the commercial focus bifurcated between a domestic and a global market focus. In the domestic market could be seen the dynamics of trial and error in finding niche markets for WiBro, e.g. mobile routers, digital shipyards, WiBro-Taxi. These market learning processes occurred despite tensions between players and their visions for the service.
The argument presented was that the ‘problem’ of WiBro should be framed in terms of uncertainties in innovation processes rather than in terms of a failure in diffusion/ commercialisation. So the coordination challenges and dispersed arenas of innovation enabled key players to interact in the social shaping of this particular technology highlighting the importance of stakeholder reflexivity and flexibility in large-scale technological innovations.
It was also noted during the Q&A that WiBro coincided with the testing and general failure of attempts at developing national technology champions that could then be exported in to global markets.

For more on social learning processes in innovation diffusion, see:


weeknotes [21092014]

OK, what have I been up to over the last few weeks:

Well the supervision of dissertation students has given way to the marking of dissertations. I can’t say I enjoy marking the dissertations I supervised (and am very glad they’re double marked) but do find interesting reading the dissertations that I haven’t supervised for the first time. … and just in case I thought there would be a pause, I’ve already started the first supervision meetings for a new set of dissertations.

piloting discourse analysis for my PhD studies continues to develop as issues are surfaced and I develop a better understanding of the method “in action”.

The writing of a couple of papers for publications continues. One is near completion and just requires final copy-proofing and permissions on images etc before submission. The other required extensive rewriting (and re-reading it, I did find it a shockingly poor piece of work – writing a short paper seems to be much harder…) and I’m waiting for feedback on that new version.

attended an excellent seminar on Unbundling the University. I hope to return to this topic in the near(ish) future. Interestingly, the imperatives for unbundling appear to be coming to the state school system in the UK (or at least England) with this example of outsourcing school services involving the Academy Enterprise Trust.

Also, we’re now well and truly into the teaching term with the two courses I’m contributing to this semester: Digital Environments for Learning; and Course Design for Digital Environments

 
