Category Archives: research

Creating Living Knowledge: the Connected Communities programme and what it tells us about university-community partnership

These are my notes from a Digital Education Seminar at the University of Edinburgh by Professor Keri Facer on the Connected Communities research programme.

As ever with these posts, my record is partial and biased and possibly includes some inaccuracies (but not on purpose).

The seminar was opened by Prof Sian Bayne, who introduced Keri as Professor of Educational and Social Futures at Bristol University and previously Research Director at FutureLab. Her research takes a critical stance on digital education and on the role of educational institutions in society. Today she will be talking about her work on Connected Communities and the newly released Creating Living Knowledge report on lessons learnt from the Connected Communities programme.
Keri Facer:
The main questions to be explored today: what is Connected Communities, what is shaping university-community partnerships, what are they creating, and what are the implications for the future trajectories of universities and their interface with their communities?
CC is a Research Councils UK programme led by the AHRC and currently funds 324 different projects. Projects range from six months to five years and involve working with external organisations from the creative economy, environment, health and well-being and so on.
The bigger picture of the programme is to address the question of how university and community knowledge can be combined to generate better research, underpinned by the assumption that co-produced research is a 'good thing'. The research councils are making huge claims for the potential of the co-production mode of research in terms of research quality and impact, while others are concerned that this agenda amounts to the instrumentalisation and marketisation of research.
CC enters a massively uneven playing field, ranging from large institutions through to voluntary community activists, freelancers, community organisations, etc. The HE sector is itself very diversified between research- and teaching-intensive institutions, interacting with socio-cultural diversity. CC also works with a wide diversity of motivations for engaging with research: generalists and learners engaged by interdisciplinary research; makers wanting to make something happen; scholars with a particular topic orientation; entrepreneurs interested in the funding available; accidental wanderers caught up in projects; and advocates for a new knowledge landscape arguing for a rethink of how knowledge is generated.
There are also different research traditions:
– participatory, collaborative, community engaged research developing grass-roots capacity
– development traditions – changing policy
– people’s history, feminist and civil rights interested in alternative narratives of history
– innovative co-design changing services and products
– open/ crowd and open innovation creating something new
– participatory arts where unsettling and exploration is the purpose.
These different traditions mobilise different performances of community and 'publicness'. They also involve different participants and audiences and different working practices. Again, these shape the landscape of collaboration.
Social networks and funding raise questions of access to social networks and of how and where conversations happen. Over 50% of partners had already worked inside universities, so other possible partners face a barrier to entry to these collaborative opportunities, while intensive workshops can discriminate against those with caring responsibilities.
So the injunction to co-produced research can reproduce and intensify existing inequalities.
Important to acknowledge that the cultures of universities can be very diverse and not only a culture of critique, e.g., engineers want to make stuff
Different groups want different things from one another: from practical help, to personal value and friendships, to symbolic benefits, e.g., of offering authenticity, credibility and status. Everyone has to negotiate the 'fantasy' of the university and the community. Moving beyond the quick gains between partners leads to difficult questions around, e.g., the legitimacy of knowledge production or the representativeness of community groups.
Different modes of collaboration emerge:
  1. division of labour – keep to our own silos
  2. relational expertise – can we see the issue through each other's eyes
  3. remake identities – learning each other's skills and knowledge so we can take on each other's roles.
  4. colonisation – unsettled identities but no learning. Academics attempting community work or community groups attempting research data collection.
Where it works well, collaboration leads to the breakdown of the division of labour, and new roles are mobilised such as catalyser; integrator; designer; broker; facilitator; project manager; data gatherer; diplomat (making things work in and between institutions); accountant; conscience; nurturer; loudhailer.
This requires time to develop trust, understand each other's expertise, and so on, so that these projects can do a different sort of work where "the adventure of thought meets the adventure of action" (A. N. Whitehead).
While there is a strong legacy from these collaborations, this legacy is precarious because key staff are often junior and in precarious employment. This is linked to the funding environment: short-term project funding can disturb the work of small organisations as well as personal relationships. The funding also requires working with HEI systems that are not fit for working with smaller and more precarious partner organisations. These negative effects are exacerbated by trends in HE towards marketisation.
We cannot state whether such projects will democratise knowledge production, as that depends on many other variables. Similarly, on the idea that co-production leads to better research: it is another set of methods, but collaboration can, if done mindfully, lead to better quality research in terms of the needs of all of those involved.
Recommendations from the research (from the report):
  1. improve the infrastructure
  2. recognise the need for time for collaboration
  3. explicitly address the risk of enhancing inequalities
  4. invest in and support civic society’s public learning infrastructure.

PhD Abstract: Twitter chat events & the making of a professional domain

Here is the latest draft of a one page abstract of my PhD:

Distributed online discussion events in social media are increasingly used as sites for open, informal professional development, knowledge sharing and community formation. Synchronous chat events hosted on Twitter have become particularly prominent in a number of professional domains. Yet theoretical and critical analysis of these Twitter chat events has, to date, been limited: this thesis contributes to the development of such analysis through a socio-material, network assemblage lens employing trans-disciplinary and multi-method research approaches. This research positions the Twitter chat events as the relational effects of network-assemblages of human and non-human actants.

This thesis explores Twitter chat events with a particular focus on human resource development (HRD) as a professional domain that is widely seen as inherently changeable, fluid, contested and continually emergent. This study examines how practitioner-generated reportage of professional practice and the specific functions of Twitter intra-act to generate a particular definition of HRD as a professional field of practice.

A combination of descriptive statistics, Social Network Analysis and analysis of the content and structure of the chat events has been employed in researching 32 separate chat events comprising 12,061 tweets. The research methods generated multiple readings of the research data and surfaced different and fluid potential lines of enquiry into the Twitter chat events. A number of these potential lines of enquiry were then selected as points of entry to 'zoom in' to the data using critical discourse analysis on a smaller sample of the chat events.

A key finding of the research is that the Twitter chat events seek to generate an idealised archetype of HRD bounded by a stable set of dominant practices. This idealised archetype is positioned in contrast to a repertoire of common HRD practices presented as illegitimate in this professional grouping. A second key finding relates to the chat event assemblages as collective achievements involving human and non-human actants. The collective effects surfaced in the research problematise (a) the notion of online communities as the product of network ties and (b) the individualist orientations of much of the literature on professional learning.

It is further argued here that the entanglement of the particular technologies and functions of Twitter with the discursive structures and strategies mobilised in the chat events creates tensions between the discursive territorialisation and stabilisation of particular discourses of professional identity and meaning-making, and the deterritorialisation, fragmentation and fluidity inscribed into Twitter itself.

Line manager role identity as facilitators of learning

 

IT Futures at Edinburgh

I’m attending the IT Futures conference at Edinburgh today. These notes are not intended to be a comprehensive record of the conference but to highlight points of interest to me and so will be subjective and partial.

A full recording of the conference will be available at the IT Futures website.

The conference opens with an address from the Principal, Sir Timothy O’Shea with an opening perspective:

Points to the strengths of the University in computing research, super-computing and so on, and ‘ludicrously’ strong in e-learning with 60 plus online postgraduate programmes. In these areas, our main competitors are in the US rather than the UK.

Beginning with a history of computing from the 1940s onwards. Points to Smallwood on using computers for self-improving teaching, and Papert on computing/e-learning for self-expression. In the 1980s/90s digital education was dominated by the OU. The 1990s rise of online collaborative learning was an unexpected development that addressed the criticism that e-learning (computer-assisted learning) lacked interactive/personalisation elements.

The 2000s saw the rise of OERs, and of MOOCs as a form of providing learning structure around OERs. Also noted the success of OLPC in Uruguay, one of the few countries to implement OLPC effectively.

Argues that the expansion of digital education has been pushed by technological change rather than pedagogical innovation. We still refer to the constructivism of Vygotsky while technology innovation has been massive.

How big is a MOOC?
– 100 MOOCs is about the equivalent in study hours of a BA Hons. A MOOC is made up of 1,000 'minnows' (I think this means small units of learning). MOOCs are good for access, as tasters and to test e-learning propositions. They also contribute to the development of other learning initiatives and enhance institutional reputations, including relevance through 'real-time MOOCs' such as the one on the Scottish referendum. MOOCs also provide a resource for learning analytics.
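As a rough check on that heuristic (my arithmetic, not the speaker's): a UK BA Hons is normally 360 credits at around 10 notional study hours per credit, i.e. roughly 3,600 hours, which would put a single MOOC at about 36 hours of study and a 'minnow' at only a couple of minutes each.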

So e-learning is mature, not new, and blended learning is ‘the new normal’ and dominated by the leading university brands of MIT, Stanford, etc. A huge contribution of e-learning is access.

A research agenda: to include modelling individual learning, including predictive learning support; speed of feedback; effective visualisation; supporting collaboration; understanding natural language; the location of the hybrid boundary (e.g., in practical tests); personal programming (coding) and how realistic it is for non-geeks to develop meaningful coding skills.

Open questions are around data integrity and ownership; issues of digital curation; integration of data sources; who owns the analysis; should all researchers be programmers?; and how to implement the concept of the learner as researcher?

Questions:

Question about artificial intelligence. Answer: Tim O'Shea's initial research interest was in developing programmes that would teach intelligently – self-improving teachers – but using AI was too difficult, and he switched towards MIT's focus on self-expression and on programmers understanding what their code was doing. He still thinks the AI route is too difficult to apply to educational systems.

Q: surprised by an absence of gaming for learning?

A: Clearly they can, and cites Stanford work on the influence of games on learning motivation.

Q: on academic credit and MOOCs

A: Thinks this is inevitable and points to Arizona State University, which is attempting to develop a full degree through MOOCs. Can see the inclusion of MOOCs in particular postgraduate programmes – a heuristic of about a third of a Masters delivered via (external) MOOCs – but this is more likely to be taken forward by more vocational universities in the UK, with MIT or Stanford MOOCs replacing staff!

Now moving on to Susan Halford on ‘Knowing Social Worlds in the Digital Revolution’:

Researches organisational change, work and digital innovation. Has not directly researched changes in academic work but has experienced them through digital innovation. Digital innovation has kick-started a revolution in research through the volume of data and the capacity to track, analyse and visualise all sorts of data. So data is no longer just used to research something but becomes the object of social research itself.

Digital traces may tell us lots about how people live, live together, politics, attitudes, etc. – data capturing social activities in real time and over time, rather than relying on the reporting of activities in interviews, surveys and so on. At least, that is the promise, and there are a set of challenges to be addressed to realise the potential of these data (also see this paper from Prof Halford).

Three key challenges: definition; methods and interdisciplinarity

Definition – what are these digital data? They are not naturally occurring and do not provide a telescope onto social reality. Digital data are generated through mediation by technology and so are not naturally occurring. In the case of Twitter there is a huge amount of data, but it is mediated by a technological infrastructure that packages the data. The world is, therefore, presented according to the categories of the software – interesting, but not naturally occurring data. Also, social media generate particular behaviours and are not simply mirrors of independent social behaviour – the example given is the retweet.

Also, there is the issue of the provenance and ownership of data. Survey data is often transparent in the methods used to generate it and, therefore, in the limits of the claims that can be made from it. But social media data is not transparent in how it is generated – the data is privately owned, and the construction of data categories and data streams is not transparent. We know that there is a difference between official and unofficial data. We do not know what Twitter is doing with its data, but it is part of an emerging data economy. So this data is not neutral; it is the product of a series of technological and social decisions that shape the data. We need to understand the socio-technical infrastructure that created it.

Method – the idea that in big data the numbers speak for themselves is wrong: numbers are interpreted. The methods we have are not good for the analysis of large data sets. Research tends towards small-scale content analysis or large-scale social network analysis, but neither is particularly effective at understanding the emergence of the social over time – at harnessing the dynamic nature of the data. A lot of big data research on Twitter is limited to mathematical structures and data mining (and is a-theoretical) and is weak on the social aspects of social media data.

Built a tool at Southampton to dynamically map data flows through retweeting.
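As an illustration of the kind of analysis being discussed (a minimal sketch of my own, not the Southampton tool), retweet flows can be represented as a directed network and simple attention measures read off it. The tweet fields below are hypothetical simplifications of the real Twitter payload:

```python
import networkx as nx

# Hypothetical, simplified tweet records: 'user' is the tweeting account and
# 'retweeted_user' names the account being retweeted (None for original tweets).
tweets = [
    {"user": "alice", "retweeted_user": None},
    {"user": "bob", "retweeted_user": "alice"},
    {"user": "carol", "retweeted_user": "alice"},
    {"user": "alice", "retweeted_user": "carol"},
]

G = nx.DiGraph()
for t in tweets:
    if t["retweeted_user"]:
        # the edge points from the retweeter to the account being amplified
        G.add_edge(t["user"], t["retweeted_user"])

# in-degree counts how often an account is retweeted: a crude measure of attention
print(sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True))
```

Even this toy version shows the point made above: what counts as a 'flow' is already a product of how the platform packages a retweet into fields.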

Interdisciplinarity: it is a challenge to operationalise interdisciplinarity.

Disciplines imagine their object of study in (very) different ways and with different forms of cultural capital (what is the knowledge that counts – ontological and epistemological differences). So the development of interdisciplinarity involves changes on both sides – researchers need to understand programming and computer scientists need to understand social theory. But also need to recognise that some areas cannot be reconciled.

Interdisciplinarity leads to questions of power-relations in academia that need to be addressed and challenged for inter-disciplinarity to work.

But this work is exciting and promising as a field in formation. It also raises responsibilities: ethical responsibilities involved in representing social groups and societies through data analytics; recognising that digital data excludes those who are not digitally connected; and recognising that data alone is inadequate, as social change involves politics and power.

Now Sian Bayne is responding to Prof Halford's talk: she welcomes the socio-technical perspective taken and points to a recent paper, "The moral character of cryptographic work", as generating interest across technical and social scientists.

Welcomes the emphasis of interdisciplinarity while recognising the dangers of disciplinary imperialism.

Questions:

What actions can be taken to support interdisciplinarity?

A: Shared resources and shared commitments are important. Academic structures also matter – refers to REF structures working against people submitting across multiple subjects (though it is pointed out that joint submissions are possible).

Time for a break ….

 

We’re back with Bernard Schafer of the School of Law talking on the legal issues of automated databases. Partly this is drawn from a PG course on the legal issues of robotics.

In popular culture the main reference for the regulation of robots is Terminator, but the more worrying case is Short Circuit: when the robot reads a book, does it create a copy of it, does the licence allow the mining of the data of the book, and so on? See the Qentis hoax. The UK is the only country to recognise copyright ownership of automatically generated works/outputs, but this can be problematic for research – can we use this data for research?

If information wants freedom, do current copyright and legal frameworks support and enable research, teaching, innovation, etc.? Similar issues arose from the industrial revolution.

Robotics is replacing labour – initially manual labour, but now there are examples of the use of robots in teaching at all levels.

But can we automate the dull parts of academic jobs? This creates some interesting legal questions: in Germany, giving a mark is an administrative act, similar to a police caution, and is subject to judicial review – can a robot undertake an administrative act in this way?

Lots of interesting examples of automated education and teaching digital services were shown (screenshot not reproduced here).

A good question for copyright law is what 'creativity' means in a world shared with automatons. For example, when does a computer shift from thinking about an idea to expressing it – a distinction fundamental to copyright law?

Final key question is: “Is our legal system ready for automated generation and re-use of research?”

Now it's Peter Murray-Rust on academic publishing, demonstrating text/content mining of chemistry texts.

…And that’s me for the day as I’m being dragged off to other commitments.

The Twitter Experience

For all the structuring effects of Twitter's functional features, the Twitter experience is generally perceived as a private one, as only the individual user can see their Twitter feed, as they have structured it, on their particular screen configuration (Gillen and Merchant 2013). This aspect of the individualisation and heterogeneity of public and open textual communication adds to the complexities of interpreting, analysing and making sense of Twitter. Gillen and Merchant's (2013) discussion of the capacity of Twitter users to organise the flow of discourses they are presented with seems to ignore both the algorithmic impositions of, for example, Trending terms in that interface and the effects of the content of individual tweets being perceived as a coherent informational flow or a chaotic mess of impressions (or both). The Twitter user experience is not an isolated or individualised one but is, rather, an entanglement of heterogeneous intentions, business logics, coded protocols, algorithmic outputs, collective norms and individual perceptions.

It is this entanglement between the human and material that opens, closes and patterns or orders the particular uses of Twitter. Twitter is constantly and actively made and remade in the intra-actions of user behaviours, hardware, coding, algorithms and visual design, rather than Twitter being a neutral utility or passive instrument.

ReCon, Research in the 21st Century: Data, Analytics and Impact

So here we are at ReCon, Research in the 21st Century: Data, Analytics and Impact at the University of Edinburgh’s Business School. I’ll be taking notes here throughout the day but these will be partial and picking up main points of interest to me.

The conference is opening with Jo Young from the Scientific Editing Co giving the welcome and introduction to the event.

The first session is from Scott Edmunds from GigaScience on "Beyond Paper". Have the 350-year-old practices of academic publishing had their day? Publishing has become the advertising of scholarship, formulated around academic clickbait. Taken to extremes, we can see bribery around impact factors, writing papers to order, guaranteed publication, etc. This has led to an increase in retractions (fifteenfold in the last decade), such that by 2045 as many papers would be retracted as published – and then we're into negative publishing.
We need to think of new systems of incentives, and we now have the infrastructure to do this, especially data publishing of the kind GigaScience provides.
GigaScience has its own data-publishing repository as well as an open access journal with an open and transparent review process. Open data and data publishing are not new: it was how Darwin worked, depositing collections in museums and publishing descriptions of finds before the analysis that led to On the Origin of Species.
Open data has a moral imperative regarding data on natural disasters, disease outbreaks and so forth. Releasing data leads to the sharing of data and analysis of that data, for example in the E. coli genome analysis. Traditional academic outputs were created, but it is also used as an example of the impact of open data. See the Royal Society report here. The crowd-sourced approach to genome sequencing is being used on, e.g., Ebola, and on rice genomes addressing the global food crisis. But the publishing of analysis remains slow and needs to be closer to real-time publishing.
So we're now interested in executable data, looking at the research cycle of interacting data and analysis leading to publications, including micro- and nano-publications that retain DOIs. A lot of this is collected on GitHub.
Also looking at the sharing of workflows using the Galaxy system and, again, giving DOIs to particular workflows (see GigaGalaxy), and at sharing virtual machines (via Amazon).
Through analysis of published papers, they found high rates of errors but also that replication was very costly.
So the call is "death to the publication, long live the research object", to reward replication rather than scholarly advertising.

Question: how is the quality of the data assured?
A: Journal publications are peer reviewed and checked using our own data scientists, while open data is not checked. Tools are available and being developed that will help improve this.

Now on to Arfon Smith from GitHub on predicting the future of publishing, looking at open source software communities for ideas that could inform academic publishing. GitHub is a solution to the issues of version control for collaboration, using Git technology. People use GitHub for different things: from single files through to massive software projects involving 7m+ lines of code. There are about 24m projects on GitHub, and it is often used by academics.
Will be talking about the publication of software and data rather than papers. Assumptions for the talk are: 1. open is the new normal; 2. the PDF is an increasingly unsatisfactory way of sharing research; and 3. we are unprepared to share data and software in useful ways.
GitHub is especially being used in the data-intensive sciences. There is the argument that we are moving into a new paradigm of science, beyond computational science into data-intensive science (data abundance) and Big Science.
Big Science requires new tools, ways of working and ways of publishing research. But as we become more data intensive, reproducibility declines under traditional publishing. In the biosciences, many methods are black boxed and so it is difficult to really understand the findings – which is not good!
To help, GitHub have a guide on how to cite code by giving a GitHub repository a DOI (via Zenodo) for academics.
The open source practices that are most applicable are:
1. rapid verification, e.g., through verification of pull requests, where the community and third-party providers undertake testing, or through metrics that check the quality of the code, e.g., Code Climate. So verification can and should be automated, and open source is "reproducible by necessity". In academia we can see the rise of benchmarking services – see, for example, Recast, or the benchmarking of algorithm performance. (A minimal sketch of this kind of automated check appears just below.)
2. innovation where there are data challenges, drawing on a culture of reuse around data products to filter out noise in research and enable focus on the specific phenomena of interest (by eliminating data from other analyses).
3. normal citations are not sufficient for software, and academic environments do not reward tool builders. So there is an idea of distributing credit to authors, tools, data and previous papers, making the credit tree transparent and comprehensive.
These innovations depend on the forming of communities around challenges and/or where open data is available.
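To make point 1 concrete: as a purely hypothetical illustration (the function and expected values are invented; this is not Recast or Code Climate), this is the kind of self-contained test a continuous-integration service could run automatically on every pull request, passing or failing with no human intervention:

```python
# Hypothetical example of an automated verification check run on each pull request.
def normalise(values):
    """Scale a list of numbers so that they sum to 1.0."""
    total = sum(values)
    return [v / total for v in values]

def test_normalise_sums_to_one():
    # A test runner (e.g. pytest) would execute this on every proposed change.
    result = normalise([2, 3, 5])
    assert abs(sum(result) - 1.0) < 1e-9
    assert result == [0.2, 0.3, 0.5]
```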
The open software community have developed a number of solutions for the challenges faced in academic publishing.

Now we've moved on to Stephanie Dawson, CEO, ScienceOpen, on "The Big Picture: Open Access content aggregators as drivers of impact" – which is framed in terms of information overload, a growth trend that is not going to go away. This is reinforced by the economic advantage in open access of publishing more, along with increased interest in open data, micro-publications, etc. At the same time, the science information market is extending to new countries such as India, Brazil and China.
Discovery is largely through search engines, indexing services (Scopus, Web of Science), personal and online networking (conferences, Mendeley) and so on. But these do not rank knowledge by providing reputation, orientation, context or inspiration.
Current tools: the journal impact factor is a blunt tool that doesn't work at the individual paper level but is still perceived as important by academics – and by publishers, as pricing correlates with impact factor. Article-based tools such as usage and dissemination metrics are common.
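For reference (my gloss, not the speaker's slide), the standard two-year journal impact factor being criticised here is a simple journal-level ratio, which is exactly why it says nothing about any individual paper:

\[
\mathrm{JIF}_{2015} = \frac{\text{citations received in 2015 by items the journal published in 2013 and 2014}}{\text{number of citable items the journal published in 2013 and 2014}}
\]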
There is an opportunity for open access to make access to published papers easier, which may undermine publishing paywalls and encourage academics to look to open access channels. But open access publications are about 10% of the total and on a lower growth trajectory, so further incentives are needed for academics to support open access publication.
ScienceOpen is an open access communication platform with 1.5m open access articles and social networking and collaboration tools. The platform allows commenting on, disseminating, reviewing or 'liking' an article. It will develop an approach to enable the ranking of individual articles that can be bundled with others, e.g., by platform users or by publishers [so there is a shift towards alternative and personalised forms of article aggregation that can be shared as collections?].

Question: impact factors can be gamed, as can alternative metrics. What is key is the quality of the data used and the analysis – are there metrics for how believable articles are?

A: We're looking at how to note the reproducibility of article findings, but this isn't always possible, so edited, collection-based approaches are one way forward.

Q: this issue of trust is not about people but should be about the data and analysis and the transparency of these – how the data came about?

So there is a need to rethink how methods sections are written. We’re also enhancing the transparency of the review process.

The final session in this section is from Peter Burnhill, Director of EDINA, on "Where data and journal content collide: what does it mean to 'publish your data'?", looking at two case studies:
1. a project on reference rot (link rot + content drift), developing ways of archiving the web and capturing how sites/URLs have changed over time. It tracked the growth of web citations in academic articles and found that over 20% of URLs are 'rotten', with the original pages cited having disappeared, including from open archives. A remedy is to use reference management software to snapshot and archive web pages at the time of citation. The project has developed a Zotero plug-in to do this (see video here); a minimal sketch of the underlying snapshot idea appears at the end of these notes on this talk.
2. an ongoing project on URL preservation by publishers. There are many smaller publishers that are 'at risk' of being lost. Considers data as working capital (which can be private as work-in-progress) or as something to be shared.
The idea of open data is not new to science and can be seen in comments on science from the 19th Century.
The web and archiving problematises the issues of fixity and malleability of data.
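By way of illustration of the snapshot-and-archive idea (my own minimal sketch, not the project's Zotero plug-in), a cited URL can be pushed to the Internet Archive's public "Save Page Now" endpoint at the time of citation:

```python
import requests

def archive_url(url):
    """Ask the Internet Archive to snapshot `url` and return the snapshot URL."""
    # The public Save Page Now endpoint; it normally redirects to a timestamped
    # copy of the form https://web.archive.org/web/<timestamp>/<url>
    resp = requests.get("https://web.archive.org/save/" + url, timeout=60)
    resp.raise_for_status()
    return resp.url

print(archive_url("https://example.com"))
```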
__________________________________________

We’re back following a brief coffee break.

Next up is Steve Wheeler on "The Future is Open: Education in the digital age". He will be talking about 'openness' and what we do with the content and knowledge that we produce and have available. Publishing is about educating our community and so should be as open and freely accessible as possible, to better educate that community.
Pedagogy comes first and technology provides the tools: we don't want technological determinism. You have to have a purpose in subscribing to a tool – technology is not a silver bullet.
"Meet Student 2.0": has been using digital tools from six months old onwards. Most of our students are younger than Google! and are immersed in the digital. But he doesn't follow the digital natives idea, though he does see merit in the digital residents and visitors concept from White and Le Cornu.
Teachers' fears about technology: 1. how to make it work; 2. how to avoid looking like an idiot; 3. 'they'll know more than me'. For learners the concerns are about access to WiFi and power. Uses the example of the floppy disk, recognised as the save icon but not as a storage device.
Students in lectures use laptops as 'windows on the world' to check on and expand what is being presented to them. What do these windows do? Find information, engage in conversations. Another example: asking about a text on Twitter can lead to a response directly from the author of that text. UNESCO talks about communities of users (2002).
Openness is based on the premise of sharing and becomes more prominent as technology makes sharing possible at scale. Mentions Martin Weller's Battle for Open and how openness as an idea has 'won', but implementation still has a long way to go.
Community is key based on common interest rather than proximity – as communities of practice and of interest. Online, en masse reduces the scope for anonymity and drives towards open scholarship where the academic opens themselves up for constructive criticism. Everything can be collaborative if we want it to be.
Celebration, connection, collaboration and communication all go into User Generated Content (UGC). Defines UGC as having *not* been through peer review, but there is peer review through blog comments, Wikipedia and Twitter conversations. Notes Wikipedia as the largest human rhizomatic structure in the world.
Moving on to CopyLeft and the Creative Commons. Rheingold on networking as a key literacy of the 21st Century in terms of amplifying your content and knowledge.
Communities of learning and professional learning networks – with a nod to six degrees of separation, but he thinks it is down to two to three degrees as we can network with people much more easily. Collaborative open networks, where information counts as knowledge if it is useful to the community. David Cormier (2007) on rhizomatic knowledge, which has no core or centre and where the connections become more important than the knowledge. Knowledge comes out of the processes of working together. This can be contrasted with the closed nature of the LMS/VLE, and students will shift as much as possible to their personal learning environments.
Has to mention MOOCs: the original cMOOCs were very much about opening content on a massive scale and were led by students. The xMOOC has closed and boxed the concept, generating accusations of a shallow learning experience.
Open access publishing. Gives the example of two of his papers: one was in an open access journal that underwent open peer review. The original paper, the reviewer comments, the response and the final paper were all published – open publishing at its best! But the other paper went to a closed journal and took three years to publish – the open journal took five months. The closed-journal paper has 27 citations against 1,023 for the open-journal paper.
Open publishing amplifies your content, e.g., the interactions generated through sharing content on SlideShare. His blog has about 100k readers a month, is another form of publication, and is all available under Creative Commons.
This is about adaptation to make our research and knowledge more available and more impactful.

Question: how are universities responding to openness?
A: It depends on the university's business model – cites the freemium model, with a basic 'product' available for free. FutureLearn, for example, gives away partner content for free, with either paid-for certification or the aim of enhancing recruitment to mainstream courses.

Now time for lunch
______________________________________________________

Now back and looking at measuring impact with Euan Adie from Altmetric.
The idea of research impact is about making a difference. Impact includes:
– quality: rigour, significance, originality, replicability
– attention: the right people see it
– impact: it makes a difference in terms of social, economic and cultural benefits.

REF impact is assessed on quality and impact. A ‘high impact journal’ assumes the journal is of quality and the right people see it (attention).

Impact is increasingly important in research funding across the world, and so it is important to look at impact.

Traditional citations counts measure attention – scholars reading scholarship.

Altmetrics manifesto – if we acknowledge that research is available and used online, then we can capture some measures of attention and impact (not quality). This tends to look at non-academic attention through blog posts and comments, tweets and newspapers, and at impact on policy-makers. But what this gives is data: a human still has to interpret it and put it into context via narrative.

Anna Clements on the university library at St Andrews. What are the policy drivers for the focus on data? Research assessments, open access requirements (HEFCE, RCUK) and research data management policies (EPSRC, 2015). These require HE to focus on the quality of research data with a view to REF 2020, asset exploitation, promotion and reputation, and managing research income – as well as student demand and expectations, especially following the increase in fees. So libraries are taking the lead in institutional data science, within a context of financial constraints and ROI, and working with academics.
Developing metrics jointly with other HEIs as Snowball Metrics, involving the UK, US and ANZ as well as publishers; the metrics are open and free to use.

Kaveh Bazargan from River Valley Technologies on “Letting go of 350 years’ legacy – painful but necessary”. The company specialises in typesetting heavy maths texts. But has more recently developed publishing platforms.

Context, personalisation and facilitation – new paper to be published

[Update: the paper was published in January and can be found here] In the New Year, a short paper by me is to be included in a special edition of TechTrends. The abstract is:

This article explores professional learning through online discussion events as sites of communities of learning. The rise of distributed work places and networked labour coincides with a privileging of individualised professional learning. Alongside this focus on the individual has been a growth in informal online learning communities and networks for professional learning and professional identity development. An example of these learning communities can be seen in the synchronous discussion events held on Twitter. This article examines a sample of these events where the interplay of personal learning and the collaborative components of professional learning and practice are seen, and discusses how facilitation is performed through a distributed assemblage of technologies and the collective of event participants. These Twitter-based events demonstrate competing forces of newer technologies and related practices of social and collaborative learning against a rhetoric of learner autonomy and control found in the advocacy of the personalisation of learning.

I’m looking forward to it coming out – along with other excellent papers from colleagues here.

Theorising Technology in Digital Education

These are my notes taken during the presentation and then tidied up a bit in terms of spelling and sense – so they may well be limited, partial and mistaken!

Welcome from Sian Bayne, with the drama of the day ("fire! Toilets!"), who confirmed that the event is being livestreamed and that the video is available here.
Lesley Gourlay, as chair for the day, also welcomed participants from across the UK and Copenhagen. The aim is to provide a forum for a more theorised and critical perspective on technology in higher education within the SRHE (Society for Research in Higher Education). Prof Richard Edwards at the School of Education gained funding for international speakers for today's event. Unfortunately Richard is ill and can't be here.

The theme of the event is developing the theoretical, ethical, political and social analysis of digital technologies and shift away from an instrumentalist perspective. The event Twitter hashtag is #shre

The first presentation is by Alex Juhasz on the distributed online network FemTechNet. FemTechNet as a network does not often speak to the field of education, so this is a welcome opportunity (she has also blogged on the event here).

FemTechNet is an active network of scholars, technologists and artists interested in technology and feminism. The network is focused on both the history and the future of women in technology sectors and practices. FemTechNet is structured through committees and has a deep, process-focused approach to its work that is important in terms of feminist practices. Projects involve the production of a white paper, teaching and teaching practices, workshops, open office hours, co-teaching, etc., modelling the interaction of theory and practice. But it has been difficult to engage students in collaborative projects, while staff/professors are much more engaged. Town halls are collaborative discussion events, with an upcoming event on Gamergate to include a teach-in. FemTechNet has also produced a 'rocking' manifesto as "feminist academic hacktivism" and "cyberfeminist praxis".
FemTechNet values are made manifest in Distributed Open Collaborative Courses (DOCCs), themed on Dialogues on Feminism and Technology (2013) and Collaborations in Feminism and Technology (2014). DOCCs stand against the xMOOC model, promoting a participative approach to course design and distributed approaches to collaboration. The DOCC was labelled the feminist anti-MOOC; it is based on deep feminist principles including wikistorming, and has attracted much press and other interest, some positive and some 'silly' (Fox News). FemTechNet has lots of notes on using tools and teaching approaches that can be applied to many critical topics beyond feminism alone.
DOCCs are designed to be distributed, with a flatter hierarchy and less of a focus on massiveness, using technology in an open way to co-create knowledge beyond transmission. More details on the DOCC as a learning commons vs a course can be found here.
The FemTechNet commons is now housed, and has been redesigned, at the University of Michigan, although this may be a way for universities to avoid Title IX violations. But as a result, the newer commons has become less open and collaborative as an online space.
Much of FemTechNet's work involved overcoming technological hurdles and was based on the unpaid work of members. FemTechNet engages with critiques of labour practices and contexts in higher education.
The DOCC networks involve a wide range of different types of universities, from Ivy League institutions to community colleges and community organisations, working collaboratively.
Student numbers are fairly small, at approximately 200 students, but with very high completion rates and very positive feedback and evaluations. Between 2013 and 2014 there was not really growth in scale, partly due to limitations of infrastructure. Now, with the support of the University of Michigan, there is an increased aspiration to develop international collaborative work.
DOCCs involve networking courses from many different fields of study, ranging from on-campus to fully online courses. The basic components of courses are keynote dialogue videos, smaller keyword dialogues and five shared learning activities. See also the situated knowledge map [link]. There is a big emphasis on shared resources, cross-disciplinarity and inter-institutional working and learning.
So while DOCCs emerged from a feminist network, the tools, models and approaches can be used in many subject areas.

After lunch

Ben Williamson is presenting on Calculating Academics: theorising the algorithmic organisation of the digital university. The opening slide is of a conceptualisation of a digital university that can react to the data and information it receives. Ben will be presenting on a shift to understanding the university as mediated by the digital, with a focus on the role of algorithms.
One of the major terms in use is the smart university, based on big data to enhance teaching, engagement, research and enterprise, and to optimise and utilise the data universities generate. This turn is situated in the wider concept of 'smart cities'.
Smart cities are ‘fabricated spaces’ that are imaginary and unrealised and perhaps unrealisable. Fabricated spaces serve as models to aspire to realise.
Smart universities are fabricated through: technical devices, software and code; social actors, including software producers and government; and discourses of texts and materials.
An algorithm is seen in computer science as a set of processes to produce a desired output. But algorithms are black-boxed, hidden in IP and impenetrable code. They are also hidden in wider heterogeneous systems involving languages, regulation and law, standards, etc.
Algorithms also emerge and change over time and are, to an extent, out of control – complex and emergent.
Socio-algorithmic relationality: algorithms co-constitute social practice (Bucher 2012); generate patterns, order and coordination (Mackenzie 2006); and are social products of specific political, social and cultural contexts that go on to be productive themselves.
They involve the translation of human action through mathematical logics (Neyland 2014). Gillespie (2014) argues for a sociological analysis of algorithms as social and political as well as technical accomplishments.
Algorithms can be read (Gillespie 2014) as: technical solutions; as synecdoche – an abbreviation for a much wider socio-technical system; as a stand-in for something else, for example corporate ownership; and as a commitment to procedure, as they privilege quantification and proceduralisation.
Big data is a problem area in this idea of the smart university. Is there a different epistemology for big data? Big data cannot exist without algorithms and has generated a number of discourses. Wired magazine has suggested that big data is leading to the end of theory, as there is no need to create a hypothesis when big data will locate patterns and results – a challenge to traditional academic practice. There is also the rise of commercial social science, such as the Facebook social science team, often linked to nudging behaviours and "engineering the public" (Tufekci 2014). This is replicated in policy development, such as the Centre for the Analysis of Social Media at Demos using new big data sets. We're also seeing new academic initiatives such as social physics at MIT, building a predictive model of human behaviour; see also the MIT Laboratory for Social Machines in partnership with Twitter.
This raises the question of what expertise is being harnessed for smarter universities. Points to the rise of alternative centres of expertise that can conduct big data analysis, labelled 'algorithmists' by Mayer-Schönberger and Cukier. Such skills and interdisciplinarity do not fit well in the university. Are we seeing the rise of non-sociologist sociologists doing better social research?
Mayer-Schönberger and Cukier's Learning with Big Data – predictive learning analytics, new learning platforms, etc. – is reflected in the discourses on the smarter university. Big data generates the university in immediate and real time – it doesn't have to wait for assessment returns. See, for example, IBM's education for a smarter planet, focused on smarter and prescriptive analytics based on big data.
Knewton talks of inferred student data that suggests the algorithm is objective and consistent. But as Seaver (2014) points out, these algorithms are created and changed through ‘human hands’.
So we're seeing a big data epistemology that uses statistics to explain and predict human behaviour (Kitchin 2014): algorithms can find patterns where science cannot, and you don't need subject knowledge to understand the data. But he goes on to argue that this rests on the fallacies of big data – big data is partial, based on samples, on what analysis is selected, and on what data is or can be captured. Jurgenson (2014) also argues for understanding the wider socio-economic networks that create the algorithms – the capture of data points is governed by political choices.
How are the assumptions of big data influencing academic research practices? Algorithms are increasingly entwined in knowledge production when working with data – for example NVivo, SPSS, Google Scholar (Beer 2012) – the algorithmic creation of social knowledge. We are also seeing the emergence of digital social research around big data and social media, e.g., the social software studies initiative – social science is increasingly dependent on digital infrastructure not of our making.
Noortje Marres: rethink social research as a distributed and shared accomplishment involving the human and non-human.
This in turn influences academic self-assessment and identity through snowball metrics on citation scores, Researchfish, etc., translating academic work into metrics. See Eysenbach's (2011) study linking tweets and rates of citation. So academics are subject to increasing quantified control mediated through software and algorithms, and we are seeing the emergence of the quantified academic self. Yet academics are socialised by these social media networks, which exacerbates this e-surveillance (Lupton 2014), while shared research develops its own lively social life outside the originator's control.
Hall (2013) points to a new epistemic environment in which academics become more social (media) entrepreneurial. Lyotard (1979) points to the importance and constraints of the computerisation of research.
Finishes with questions:
– how do cognitive-based classrooms learn?
– what data is collected to teach?
– should academics learn to code?

A lot of discussion on the last question. It was also pointed out that we don't ask whether coders should learn to be sociologists.
It was also pointed out that people demonstrate the importance of embodied experiences through protests and demonstrations, which reflects what is lost in the turn to data.

After a short break, we now have Norm Friesen on "Education Technology or Education as Always-Already Technological", talking about educational technology not as something new but as going through a series of entwinements over time. Norm will look at the older technologies of the textbook and the lecture, looking back at older recognisable forms.
Looking back, we can argue that educational technologies now are not presenting particularly novel problems for higher education. Rather, higher education has always been constituted along with its technologies and practices, and so we can see how practices can adapt to newer technologies now.
Technologies in education have always been about inscription and symbols as well as performance. We can understand the university as a discourse network – see Kittler's discourse networks in the analysis of publishing in the 19th century. Institutions like universities are closely linked to technology, storing and using technologies and modifying them for their practices.
Take the example of tablets, going back to ancient times, or the hornbook, or other forms tightly coupled with institutions of learning and education – such as clay tablets dating back to 2500–2000 BCE that show student work and teacher corrections as symbolic inscriptions of teaching and learning practices. Such tablets work at the scale of individual student work or as larger epic literatures. We can see continued institutional symbolic practices through to the iPad. Here technologies may include epistemic technologies such as knowledge of multiplication tables or the procedures of a lecture – technologies as a means to an end – so technologies are 'cultural techniques'.

For the rest of the presentation he will focus on the textbook and the lecture as technologies that are particularly under attack in the re-visioning of the university. Ideas of the flipped classroom still privilege the lecture, through video capture. Similarly, the textbook has yet to be overtaken by the e-textbook. Both provide continuities from over 800 years of practice and performance.
The lecture goes back to the earliest universities, originally as the recitation of a text – transmission rather than generation of knowledge, with a focus on the retention of knowledge. Developing one's own ideas in a lecture was unknown, and student work involved extensive note-taking from oral teaching (see Blair 2008). The lecture is about textual reproduction. Even after the printing press, this lecture practice continued, although slowly the lecturer's own commentary on the text was introduced, manifested as interlineations between the lines written from the dictated text. Educational practice tended not to change as rapidly as the technologies of printing, such that education was about 100 years behind.
But around 1800 we see the first lectures given only from the lecturer's own notes, so the lecture was recast around the individual as the creator of knowledge. The individual lecturer and student, not the official text, became the authoritative sources of knowledge. The notion of performance also becomes increasingly important in the procedure of the lecture.
In textbooks we see pedagogical practice embedded in the text as end of chapter questions for the student to reflect and respond to (the Pestalozzian method, 1863). This approach can be seen in Vygotsky, Mead and self-regulated learning.
Specific technological configurations supported the increased emphasis on performance, such as podcasting, PowerPoint, projectors, etc. (see TED talks).
In the textbook, similar innovations are happening in terms of layout, multimedia and personalised questioning (using algorithms). The textbook becomes an interactional experience but continues from much older forms of the textbook. What is central is the familiar forms – the underlying structures have persisted.

But it is also the case that lecturers no longer espouse their own theories; they do not create new knowledge in the lecture.

Making & Breaking Rules in IT Rich Environments

These are my notes taken during the presentation and then tidied up a bit in terms of spelling and sense – so they may well be limited, partial and mistaken!

Prof Kalle Lyytinen, Case Western Reserve University.

The welcome came from Robin Williams, noting that Kalle has a wide range of appointments and research interests and often acts as a bridge builder across different subject disciplines and between the American and European research communities. Kalle has been particularly supportive of research in IT infrastructures and of the development of research communities on IT infrastructure.

Kalle starts the presentation with a discussion of the background of this paper that has been developing over the last five years. His research is positioned within science and technology studies (STS) but with a more behaviourist focus. This paper investigates issues of regulation which is fundamental to social interactions through establishing what is and is not acceptable behaviour within a specific context.

The example of the Société Générale fraud by Jérôme Kerviel, who fooled the control systems to undertake fraudulent trading, resulting in losses for the bank of approximately €5bn. This fraud was contrasted with old-fashioned approaches to bank robbery, and the regulatory regimes aimed at preventing such robberies, to highlight that digital banking requires new and different regulatory regimes.

IT systems embed rules that have regulatory functions on access to and the use of resources. Yet a key concern remains with how social actors comply with and work around these rules. So this research is concerned with how IT can be seen as materially based organisational regulation in interaction with the social.

What is a rule? Rules tend to be defined as purely social statements of the expectations on behaviours of participants in a system, and it is assumed that such rules are generally reciprocal. The expectations should create stabilities of behaviour, yet they are not mechanistic, and so variances occur through misunderstanding, reinterpretation and resistance. For organisations, what is key is the materiality of rules through systems, processes, expressions in the design of space and so forth, which also generate stability over space and time. Regulation combines social and material components intertwined in a practice that decreases variance in behaviours and also facilitates the coordination of collective action.

Regulation is a meeting point of tensions between structure and agency raising questions on, for example, centralisation vs decentralisation of decision-making.

An IT system is a dynamic and expansive resource through which regulatory power is exercised through the materialisation of rules. Rules are stored, diffused and enforced through IT. IT encodes and embeds rules (Latour 1996, 2005), while rules become more complex through IT systems that allow complex combinations of rules. IT can track, record and identify events at large scale, high speed and low cost – which is where big data can help identify and enforce new rules. Through IT, regulation becomes less visible as it is embedded in, for example, user interfaces.

The example of high-frequency trading and how IT rules are established that limit what types of trades can be operationalised – see Lewis's Flash Boys.

Regulation has three dimensions: 1. the rules, which are materialised as 2. an IT artefact, which is interdependent with 3. practices. Rules become coupled over time with practices (such that a rule may be forgotten as it is embedded in the IT artefact).

IT regulation research in the 1970s to 90s viewed regulation as oppressive and deterministic, while from the 1990s onwards research was more concerned with deviation in practice. A lot of research on regulation positioned IT as a contextual variable, while a much smaller number of studies looked specifically at the IT in terms of materialisation, the enactment of rules in practices, and temporal aspects (Leonardi 2011). So research on IT and regulation is limited.

The research focuses on the sources and co-existence of multiple IT-based regulations, which generate heterogeneous and conflicting regulations and so have multiple consequences.

Our focus is on practices of maintaining and transforming rules that mediate collective activity. Regulations have three types of origin: (i) autonomous, where people agree on behaviours; (ii) control-orientated, based on explicit rules and laws; or (iii) joint. The research is interested in practices in IT-rich environments, as rules become more invisible when they are 'inscripted' into technology and/or material. The same rule can be embedded in different ways, e.g., speeding rules embedded in speed bumps and/or in a vocal warning from the speedometer.
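As a trivial illustration of the same point (my own sketch; the names and the threshold are invented), here is the speeding rule materialised in software rather than in tarmac – the rule sits inside the artefact and only becomes visible to the user when it is breached:

```python
# Illustrative sketch: a rule 'inscripted' into an IT artefact.
SPEED_LIMIT_MPH = 30  # invented threshold standing in for the regulation

def speed_warning(current_speed_mph):
    """The rule lives here, in the artefact, rather than in a speed bump."""
    if current_speed_mph > SPEED_LIMIT_MPH:
        return "Warning: you are exceeding the speed limit"
    return "OK"

print(speed_warning(42))   # breaching the rule makes it visible
print(speed_warning(28))   # otherwise the rule stays invisible
```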

The study was a seven-year longitudinal study of regulatory episodes in a virtual learning environment (VLE): how teaching and learning behaviours are regulated through the VLE. Data was gathered from email logs, interviews and document analysis. The analysis focused on critical incidents, simple statistics and lexical analysis of the emails.
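For a sense of what 'simple statistics and lexical analysis' of an email corpus can mean in practice, here is a minimal sketch (the email texts are invented; the study's actual coding scheme is not reproduced):

```python
from collections import Counter
import re

# Invented email bodies standing in for the study's email logs.
emails = [
    "Please restrict forum access to tutors only",
    "Can we open the wiki tool for all student groups?",
]

word_counts = Counter()
for body in emails:
    # lower-case and tokenise each message, then accumulate word frequencies
    word_counts.update(re.findall(r"[a-z']+", body.lower()))

print(word_counts.most_common(5))
```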

The research questions were: 1. what is the focus of the regulatory episodes and 2. what was the temporal coupling between regulation and behaviour. The VLE provides a rich environment with alternative forms of regulation, dynamic in terms of wider changes in higher education, rules embedded in the application and how it is used.

Five types of regulatory episodes, all of which changed over time:

1. functional – restrictions on how users use the VLE based on the functionality of the VLE

2. Tool orientated – specific tools are imposed for specific activities

3. Role orientated – which roles can use which aspects of the VLE

4. Procedure orientated – where learning processes such as course activities are practiced in new ways

5. Opportunity orientated.

Material regulation is dominant in functional and tool orientated rules while the social was dominant in role and procedure orientated rules.

The complexity of the multiplicity of rules and sources of rules led to confusion and difficulties in enforcing rules but, with low levels of constraint, were also sources of innovation in practices. Also, increasing the formal limits of the IT systems generated conflict over the rules.

As the operationalisation of the VLE continued over time so the complexity and volume of rules increased.

Over time, the central administration of the university asserted increased control over the VLE for purposes of efficiency and uniformity of provision, but also to legitimise its existence. This increased control also removed a lot of local innovations. The materialisation of the rules in the VLE enabled greater centralised control. But IT choices also then limit what future flexibility is possible.

 

 

weeknotes [20102014]

Over the last few weeks, I’ve been

further working through my research involving discourse analysis along with network and other sociomaterial methods for my PhD. I think I'm developing a stronger understanding of the method "in action" and of Technology Enhanced Learning.

I’m also continuing to enjoy the teaching on two courses: Digital Environments for Learning; and Course Design for Digital Environments.

I’m also continuing to contribute to the development of two initiatives which I’ll hopefully write about sometime soon.