I’m currently working on an open content course – the learner proposes the learning activities, the evidence they will gather and how they will demonstrate that they have met the agreed learning outcomes. It is pretty interesting stuff and opens up huge opportunities for experimentation in learning and education. To help keep students on track in the course, we are looking at developing a couple of sets of process-based digital badges, and this is an early sketch of the possible structure of the badges.
Paul Campbell from Scottish Water and I have a new article published: Paul Campbell and Peter Evans (2016), “Reciprocal benefits, legacy and risk: Applying Ellinger and Bostrom’s model of line manager role identity as facilitators of learning”, European Journal of Training and Development, Vol. 40, Iss. 2, pp. 74–89. http://dx.doi.org/10.1108/EJTD-01-2015-0007
The abstract of the paper is as follows:
– The purpose of this paper is to explore the beliefs held by managers about their roles as facilitators of learning with their employees in a public utilities organisation.
– The research was based on Ellinger and Bostrom’s (2002) study of managers’ beliefs about their role as facilitators of learning in learning-orientated firms. Abductive research logic was used in a small-sample, in-depth qualitative study using critical incident interviews.
– Managers in the study conveyed strong self-efficacy and outcome beliefs confirming the central role in workplace learning of line managers who adopt a coaching approach. Key new insights were also found in managers’ beliefs on acting as role models within the organisation and their beliefs on the need to manage skills-related organisational risk.
– A key limitation of the research is inherent in the use of critical incident technique, as it provides information on the nature of “atypical events” as opposed to more gradual, tacit and typically ongoing learning at work.
– The managers’ belief map derived from the data provides a context-specific “target of change” with which to challenge the wider organisation regarding learning facilitation. The research also shows how industry-specific contexts may provide specific pathways for developing managers in their role as facilitators of learning.
– The value of the research is twofold: first, providing further validation of the findings from Ellinger and Bostrom’s (2002) research on managers’ beliefs on the effective facilitation of workplace learning; second, additional insights on managerial beliefs regarding role modelling and succession planning are identified, and the implications for management development are discussed.
I’m attending the IT Futures conference at Edinburgh today. These notes are not intended to be a comprehensive record of the conference but to highlight points of interest to me and so will be subjective and partial.
A full recording of the conference will be available at the IT Futures website.
The conference opens with an address from the Principal, Sir Timothy O’Shea with an opening perspective:
Points to the strengths of the University in computing research, super-computing and so on, and notes it is ‘ludicrously’ strong in e-learning with 60-plus online postgraduate programmes. In these areas, our main competitors are in the US rather than the UK.
Beginning with a history of computing from the 1940s onwards. Points to Smallwood on using computers for self-improving teaching and Papert on computing/e-learning for self-expression. In the 1980s/90s digital education was dominated by the OU. In the 1990s the rise of online collaborative learning was an unexpected development that addressed the criticisms that e-learning (computer-assisted learning) lacked interactive/personalisation elements.
Argues that the expansion of digital education has been pushed by technological change rather than pedagogical innovation. We still refer to the constructivism of Vygotsky while technology innovation has been massive.
How big is a MOOC?
– 100 MOOCs is about the equivalent in study hours of a BA Hons. A MOOC is made up of a thousand ‘minnows’ (I think this means small units of learning). MOOCs are good for access, as tasters and to test e-learning propositions. They also contribute to the development of other learning initiatives and enhance institutional reputations, including relevance through ‘real-time MOOCs’ such as on the Scottish referendum. MOOCs provide a resource for learning analytics.
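As a rough sanity check of the ‘100 MOOCs ≈ one BA (Hons)’ claim, here is a back-of-envelope calculation of my own; the 3,600-hour degree figure is my assumption (360 UK credits at 10 notional learning hours each), not a figure from the talk:

```python
# Back-of-envelope estimate of the MOOC size implied by the talk.
# Assumption (mine, not the speaker's): a UK BA (Hons) is 360 credits
# at 10 notional learning hours per credit = 3,600 study hours.
ba_hons_hours = 360 * 10
moocs_per_degree = 100

hours_per_mooc = ba_hons_hours / moocs_per_degree
print(hours_per_mooc)  # 36.0 study hours per MOOC
```

On those assumptions a single MOOC works out at roughly a week of full-time study, which fits the ‘taster’ framing above.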
So e-learning is mature, not new, and blended learning is ‘the new normal’ and dominated by the leading university brands of MIT, Stanford, etc. A huge contribution of e-learning is access.
A research agenda: to include modelling individual learning, including predictive learning support; speed of feedback; effective visualisation; supporting collaboration; understanding natural language; the location of the hybrid boundary (eg, in practical tests); personal programming (coding) and how realistic it is for non-geeks to develop meaningful coding skills.
Open questions are around data integrity and ownership; issues of digital curation; integration of data sources; who owns the analysis; should all researchers be programmers?; and how to implement the concept of the learner as researcher?
Question about artificial intelligence: Answer – Tim O’Shea’s initial research interest was in developing programmes that would teach intelligently – self-improving teachers – but using AI was too difficult, so he switched towards MIT’s focus on self-expression and on programmers understanding what their code was doing. Still thinks the AI route is too difficult to apply to educational systems.
Q: surprised by an absence of gaming for learning?
A: clearly games can be used for learning – cites Stanford research on the influence of games on learning motivation.
Q: on academic credit and MOOCs
A: Thinks this is inevitable and points to Arizona State University, which is attempting to develop a full degree through MOOCs. Can see the inclusion of MOOCs in particular postgraduate programmes – a heuristic of about a third of a Masters delivered via (external) MOOCs – but this is more likely to be taken forward by more vocational universities in the UK, using MIT or Stanford MOOCs to replace staff!
Now moving on to Susan Halford on ‘Knowing Social Worlds in the Digital Revolution’:
Researches organisational change, work and digital innovation. Has not directly researched changes in academic work but has experienced them through digital innovation. Digital innovation has kick-started a revolution in research through the volume of data and the capacity to track, analyse and visualise all sorts of data. So data is no longer simply used to research something else but becomes the object of social research itself.
Digital traces may tell us lots about how people live, live together, politics, attitudes, etc. Data capturing social activities in real time and over time rather than relying on the reporting of activities in interviews, surveys and so on. At least, that is the promise, and there are a set of challenges to be addressed to realise the potential of these data (also see this paper from Prof Halford).
Three key challenges: definition, methods and interdisciplinarity.
Definition – what are these digital data? They are not naturally occurring and do not provide a telescope onto social reality; digital data is generated through mediation by technology. In the case of Twitter, there is a huge amount of data, but it is mediated by a technological infrastructure that packages the data. The world is, therefore, presented according to the categories of the software – interesting, but not naturally-occurring, data. Also, social media generate particular behaviours and are not simply mirrors of independent social behaviour – gives the example of the ReTweet.
Also, there is the issue of the provenance and ownership of data. Survey data is often transparent in the methods used to generate it and, therefore, in the limits of the claims that can be made from it. But social media data is not transparent in how it is generated – the data is privately owned, and the construction of data categories and data streams is not transparent. We know that there is a difference between official and unofficial data. We do not know what Twitter is doing with its data, only that it is part of an emerging data economy. So this data is not neutral; it is the product of a series of technological and social decisions that shape the data. We need to understand the socio-technical infrastructure that created it.
Method – the idea that in big data the numbers speak for themselves is wrong: numbers are interpreted. The methods we have are not good for the analysis of large data. Research tends towards small-scale content analysis or large-scale social network analysis, but neither is particularly effective at understanding the emergence of the social over time – at harnessing the dynamic nature of the data. A lot of big data research on Twitter is limited to mathematical structures and data mining (and is a-theoretical) and is weak on the social aspects of social media data.
Built a tool at Southampton to dynamically map data flows through ReTweeting.
Interdisciplinarity: but it is a challenge to operationalise interdisciplinarity.
Disciplines imagine their object of study in (very) different ways and with different forms of cultural capital (what is the knowledge that counts – ontological and epistemological differences). So the development of interdisciplinarity involves changes on both sides – researchers need to understand programming and computer scientists need to understand social theory. But also need to recognise that some areas cannot be reconciled.
Interdisciplinarity leads to questions of power-relations in academia that need to be addressed and challenged for inter-disciplinarity to work.
But this work is exciting and promising as a field in formation. It also raises responsibilities: the ethical responsibilities involved in representing social groups and societies through data analytics; recognising that digital data excludes those who are not digitally connected; and acknowledging that data alone is inadequate, as social change involves politics and power.
Now Sian Bayne is responding to Prof Halford’s talk: welcomes the socio-technical perspective taken and points to a recent paper, “The moral character of cryptographic work”, as generating interest across technical and social scientists.
Welcomes the emphasis on interdisciplinarity while recognising the dangers of disciplinary imperialism.
What actions can be taken to support interdisciplinarity?
A: shared resources and shared commitments are important. Academic structures are also important – refers to the REF structures militating against people submitting across multiple subjects (but it is pointed out that joint submissions are possible).
Time for a break ….
We’re back with Burkhard Schafer of the School of Law talking on the legal issues of automated databases. Partly this is drawn from a PG course on the legal issues of robotics.
The main reference point on the regulation of robots is Terminator, but Short Circuit raises the more pertinent worries: eg, when the robot reads a book, does it create a copy of it? Does the licence allow the mining of the book’s data? See the Qentis hoax. The UK is the only country to recognise copyright ownership of automatically generated works/outputs, but this can be problematic for research – can we use this data for research?
If information wants to be free, do current copyright and legal frameworks support and enable research, teaching, innovation, etc.? Similar issues arose from the industrial revolution.
Robotics replacing labour – initially manual labour, but now there are examples of the use of robots in teaching at all levels.
But can we automate the dull parts of academic jobs? This creates some interesting legal questions, ie, in Germany giving a mark is an administrative act similar to a police caution and is subject to judicial review – can a robot undertake an administrative act in this way?
Lots of interesting examples of automated education and teaching digital services.
A good question for copyright law is what ‘creativity’ means in a world shared with automatons. For example, when does a computer shift from thinking an idea to expressing it, the latter being fundamental to copyright law?
Final key question is: “Is our legal system ready for automated generation and re-use of research?”
Now it’s Peter Murray-Rust on academic publishing, demonstrating text and content mining of chemistry texts.
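To give a flavour of what such content mining involves, here is a toy sketch of my own (not the tooling demonstrated in the talk): a naive regular expression that pulls chemical-formula-like tokens out of running text.

```python
# Toy content-mining sketch (my illustration, not the demonstrated tooling):
# extract tokens that look like chemical formulae from a chemistry text.
import re

# Two or more element symbols ([A-Z][a-z]?), each optionally followed by a count.
FORMULA = re.compile(r"\b(?:[A-Z][a-z]?\d*){2,}\b")

text = "The reaction of H2SO4 with NaOH yields Na2SO4 and H2O."
print(FORMULA.findall(text))  # ['H2SO4', 'NaOH', 'Na2SO4', 'H2O']
```

Real chemistry mining needs far more than a regex (tokenising units, disambiguating abbreviations, parsing tables), which is exactly why purpose-built tooling matters.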
…And that’s me for the day as I’m being dragged off to other commitments.
For all the structuring effects of the Twitter functional features, the Twitter experience is generally perceived as a private one, as only the individual user can see their Twitter feed, as they have structured it, on their particular screen configuration (Gillen and Merchant 2013). This aspect of the individualisation and heterogeneity of public and open textual communication adds to the complexities of interpreting, analysing and making sense of Twitter. Gillen and Merchant’s (2013) discussion of the capacity of Twitter users to organise the flow of discourses they are presented with seems to ignore both the algorithmic impositions of, for example, Trending terms in that interface and the effects of the content of individual Tweets being perceived as a coherent informational flow or a chaotic mess of impressions (or both). The Twitter user experience is not an isolated or individualised one but is, rather, an entanglement of heterogeneous intentions, business logics, coded protocols, algorithmic outputs, collective norms and individual perceptions.
It is this entanglement between the human and material that opens, closes and patterns or orders the particular uses of Twitter. Twitter is constantly and actively made and remade in the intra-actions of user behaviours, hardware, coding, algorithms and visual design, rather than Twitter being a neutral utility or passive instrument.
I’ve recently started using Chris Winfield‘s technique of chunking tasks into 40 pomodoros per week, which he describes here. I’m essentially using this technique for “maker” time – as described in this post from Paul Graham. I’ve found this technique works really well for writing (once I know what I’m going to write), as described as writing sprints here.
It may be the case that the “quieter summer” has made this easier and once the new academic year starts, I’ll find it harder to maintain this, but so far, I’ve found it impressively productive and not too tough to keep to.
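For a sense of scale, assuming the standard 25-minute pomodoro (the post doesn’t state the unit length, so that is my assumption):

```python
# Rough scale of a 40-pomodoro week; the 25-minute unit is the standard
# pomodoro length and is my assumption here, not stated in the post.
pomodoros_per_week = 40
minutes_per_pomodoro = 25

maker_hours = pomodoros_per_week * minutes_per_pomodoro / 60
print(round(maker_hours, 1))  # roughly 16.7 hours of 'maker' time per week
```

That is about two full working days of protected writing time per week, which squares with finding it productive but potentially hard to sustain in term time.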
This Tweet caught my eye today by triggering a train of thoughts on what a ‘distributed curriculum’ might involve.
This idea appears to position the curriculum as an outcome of interacting within networks of people, resources and technologies. I wonder if this curriculum is a restating, for a formal education context, of the sort of personalised learning I previously discussed here. One of the issues here is curricula design and whether all students have the capabilities, capacities and capital to direct the generation of their own curriculum in a coherent and sustainable manner, or whether ‘fluid curricula’ models will need to be fairly striated or ‘channelled’. Similarly, there is a need to develop successful practices for supporting students and staff in approaches to self-directed and self-regulated learning, enabling deep engagement with ‘wicked’ subject problems.
Another aspect of the distributed curriculum may well be a social one: both participating in external professional and other communities and generating ephemeral communities of learners that ‘swarm’ around specific learning objects and artefacts, as well as collectively bringing these objects/artefacts into engagement with the subject problem of interest.
It has been a more hectic couple of weeks in some ways with
more exam boards as it’s that time of year
continuing planning course staffing for next year so my head was buried in spreadsheets for a while
researching literature on communities on Twitter and how the affective aspects of communities might distinguish them from networks
meetings, lots of meetings …
assessing applications to be part of an exciting new initiative to launch in the New Year
reading up on Open Badges for possible inclusion in a new course launching in January 2016
attending the ReCon conference on open data and open publishing at Edinburgh University Business School. My notes on the conference can be found here
supervising a number of super dissertations which is great!
So here we are at ReCon, Research in the 21st Century: Data, Analytics and Impact at the University of Edinburgh’s Business School. I’ll be taking notes here throughout the day but these will be partial and picking up main points of interest to me.
The conference is opening with Jo Young from the Scientific Editing Co giving the welcome and introduction to the event.
The first session is from Scott Edmunds from GigaScience on “Beyond Paper”. Have the 350-year-old practices of academic publishing had their day, with publishing functioning as the advertising of scholarship, formulated around academic clickbait? Taken to extremes, we can see the use of bribery around impact factors, writing papers to order, guaranteed publications, etc. This has led to an increase in retractions (15-fold in the last decade), such that by 2045 as many papers will be retracted as published – and then we’re into negative publishing.
We need to think of new systems of incentives, and we now have the infrastructure to do this, especially data publishing of the kind GigaScience provides.
GigaScience has its own data publishing repository as well as an open access journal with an open and transparent review process. Open data and data publishing are not new: it was how Darwin worked, depositing collections in museums and publishing descriptions of finds before the analysis that led to On the Origin of Species.
Open data has a moral imperative regarding data on natural disasters, disease outbreaks and so forth. Releasing data leads to sharing of data and analysis of that data – for example, the E. coli genome analysis. Traditional academic outputs were created, but it is also used as an example of the impact of open data. See the Royal Society report here. The crowd-sourced approach to genome sequencing is being used in, eg, Ebola and in rice genomes addressing the global food crisis. But the publishing of analysis remains slow and needs to be closer to real-time publishing.
So we’re now interested in executable data, looking at the research cycle of interacting data and analysis leading to publications as micro- and nano-publications that retain DOIs. A lot of this is collected on GitHub.
Also looking at the sharing of workflows using the Galaxy system and, again, giving DOIs to particular workflows (see GigaGalaxy) and sharing virtual machines (via Amazon).
Through analysis of published papers, they found high rates of errors but also that replication was very costly.
So the call is “death to the publication, long live the research object”, to reward replication rather than scholarly advertising.
Question: how is the quality of the data assured?
A: Journal publications are peer reviewed and checked using our own data scientists, while open data is not checked. Tools are available and being developed that will help improve this.
Now on to Arfon Smith from GitHub on predicting the future of publishing. Looking at open source software communities for ideas that could inform academic publishing. GitHub is a solution to the issues of version control for collaboration, using Git technology. People use GitHub for different things: from single files through to massive software projects involving 7m+ lines of code. There are about 24m projects on GitHub and it is often used by academics.
Will be talking about the publication of software and data rather than papers. Assumptions for the talk are: 1. open is the new normal; 2. the PDF is an increasingly unsatisfactory way of sharing research; and 3. we are unprepared to share data and software in useful ways.
GitHub is especially being used in the data-intensive sciences. There is the argument that we are moving into a new paradigm of science, beyond computational science into data-intensive science (data abundance) and Big Science.
Big Science requires new tools, ways of working and ways of publishing research. But as we become more data intensive, reproducibility declines under traditional publishing. In the biosciences, many methods are black boxed and so it is difficult to really understand the findings – which is not good!
To help, GitHub have a guide on how to cite code by giving a GitHub repository a DOI (via Zenodo) for academics.
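Looking back from a later editing pass: one convention that has since emerged for making repositories citable, alongside the Zenodo DOI route mentioned above, is a `CITATION.cff` file in the repository root. This postdates the talk, so the sketch below is purely illustrative – the title, version, author and DOI are all placeholders, not real records.

```yaml
# Hypothetical CITATION.cff sketch; every value here is a placeholder.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "example-analysis-pipeline"
version: "1.0.0"
doi: "10.5281/zenodo.0000000"
date-released: "2016-01-01"
authors:
  - family-names: "Smith"
    given-names: "Jane"
```

GitHub reads this file and offers a “Cite this repository” prompt, which addresses exactly the citation problem the guide describes.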
The open source practices that are most applicable are:
1. rapid verification, eg, through verification of pull requests, where the community and third-party providers undertake testing or use metrics that check the quality of the code, eg, Code Climate. So verification can and should be automated, and open source is “reproducible by necessity”. So in academia we can see the rise of benchmarking services – see, for example, Recast or benchmarking algorithm performance.
2. innovation where there are data challenges, by drawing on a culture of reuse around data products to filter out noise in research and enable focus on the specific phenomena of interest (by eliminating data accounted for in other analyses)
3. normal citations are not sufficient for software, and academic environments do not reward tool builders. So there is an idea of distributing credit across authors, tools, data and previous papers, making the credit tree transparent and comprehensive.
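The automated verification in point 1 is, in open source practice, typically a continuous-integration configuration that runs the test suite on every pull request. A minimal sketch follows, using GitHub Actions syntax (which postdates this talk, so this is purely illustrative; the requirements file and test command are hypothetical):

```yaml
# Hypothetical CI sketch: run the test suite on every pull request so that
# verification of contributed changes is automatic.
name: verify-pull-request
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # project dependencies (hypothetical)
      - run: pytest tests/                     # the automated verification step
```

The point of the sketch is the workflow shape: every proposed change is verified mechanically before a human reviews it, which is the “reproducible by necessity” culture the talk describes.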
These innovations depend on the forming of communities around challenges and/ or where open data is available.
The open software community have developed a number of solutions for the challenges faced in academic publishing.
Now we’ve moved on to Stephanie Dawson, CEO, ScienceOpen, on “The Big Picture: Open Access content aggregators as drivers of impact” – which is framed in terms of information overload, a growth trend that is not going to go away. This is reinforced by the economic advantage in open access of publishing more, along with increased interest in open data, micro-publications, etc. At the same time, the science information market is extending to new countries such as India, Brazil and China.
Discovery is largely through search engines, indexing services (Scopus, Web of Science), personal and online networking (conferences, Mendeley) and so on. But these do not rank knowledge by providing reputation, orientation, context and inspiration.
Current tools: the journal impact factor is a blunt tool that doesn’t work at the individual paper level but is still perceived as important by academics – and by publishers, as pricing correlates with impact factor. Article-based tools such as usage and dissemination metrics are common.
There is an opportunity for open access to make access to published papers easier, which may undermine publishing paywalls and encourage academics to look to open access channels. But open access publications are about 10% of the total and on a lower growth trajectory. So further incentives are needed for academics to support open access publications.
ScienceOpen is an open access communication platform with 1.5m open access articles and social networking and collaboration tools. The platform allows commenting on, disseminating, reviewing or ‘liking’ an article. It will develop an approach to enable the ranking of individual articles that can be bundled with others, eg, by platform users or by publishers [so there is a shift towards alternative and personalised forms of article aggregation that can be shared as collections?].
Question: impact factors can be gamed, as can alternative metrics. What is key is the quality of the data used and the analysis – are there metrics for how believable articles are?
A: We’re looking at how to note the reproducibility of article findings, but this isn’t always possible, so edited collections are a way forward.
Q: the issue of trust is not about people but should be about the data and analysis and the transparency of these – how did the data come about?
A: So there is a need to rethink how methods sections are written. We’re also enhancing the transparency of the review process.
The final session in this section is Peter Burnhill, Director, EDINA, on “Where data and journal content collide: what does it mean to ‘publish your data’?”. Looking at two case studies:
1. a project on reference rot (link rot + content drift) to develop ways of archiving the web and capturing how sites/URLs have changed over time. It tracked the growth of web citations in academic articles and found that 20%+ of URLs are ‘rotten’ and the original pages cited have disappeared, including from open archives. A remedy is to use reference management software to snapshot and archive web pages at the time of citation. The project has developed a Zotero plug-in to do this (see video here).
2. an ongoing project on URL preservation by publishers. There are many smaller publishers ‘at risk’ of being lost. Considers data as working capital (which can be private as work-in-progress) or as something to be shared.
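The link-rot check in the first case study can be sketched in a few lines. This is my own minimal illustration (the function names are mine, and the actual project used far more careful crawling and archiving): a cited URL counts as ‘rotten’ when it is unreachable or returns an error status.

```python
# Minimal link-rot sketch (my illustration, not the project's tooling):
# a cited URL counts as 'rotten' if unreachable or answering with an error.
import urllib.request
import urllib.error

def fetch_status(url, timeout=10):
    """Return the HTTP status code for a URL, or None if unreachable."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # server answered, but with an error status
    except (urllib.error.URLError, OSError):
        return None            # DNS failure, refused connection, timeout...

def is_rotten(status):
    """Classify a fetch result: no response or any 4xx/5xx is link rot."""
    return status is None or status >= 400

# Example classifications (no network needed for the logic itself):
print([is_rotten(s) for s in (200, 301, 404, None)])  # [False, False, True, True]
```

Note that this only detects link rot; content drift (the page still resolves but no longer says what was cited) is the harder half of the problem and is why snapshotting at the time of citation matters.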
The idea of open data is not new to science and can be seen in comments on science from the 19th Century.
The web and archiving problematises the issues of fixity and malleability of data.
We’re back following a brief coffee break.
Next up is Steve Wheeler on “The Future is Open: Education in the digital age”. He will be talking about ‘openness’ and what we do with the content and knowledge that we produce and have available. Publishing is about educating our community and so should be as open and freely accessible as possible, to better educate that community.
Pedagogy comes first and technology provides the tools: we don’t want technological determinism. You have to have a purpose in subscribing to a tool – technology is not a silver bullet.
“Meet Student 2.0”: has been using digital tools from six months old onwards. Most of our students are younger than Google! and are immersed in the digital. But I don’t follow the digital natives idea, though I do see merit in the digital residents and visitors concept from White and Le Cornu.
Teachers fear technology for three reasons: 1. how to make it work; 2. how to avoid looking like an idiot; 3. they’ll know more than me. For learners, the concerns are about access to WiFi and power. Uses the example of the floppy disk being recognised as the save icon but not as a storage device.
Students in lectures use laptops as ‘windows on the world’ to check on and expand on what is being presented to them. But what do these windows do: find information, engage in conversations. Another example is asking about a text on Twitter leading to a response directly from the author of that text. UNESCO talks about communities of users (2002).
Openness is based on the premise of sharing and becomes more prominent as technology makes sharing possible at scale. Mentions Martin Weller’s Battle for Open and how openness as an idea has ‘won’ but implementation still has a long way to go.
Community is key, based on common interest rather than proximity – as communities of practice and of interest. Being online, en masse, reduces the scope for anonymity and drives towards open scholarship, where academics open themselves up to constructive criticism. Everything can be collaborative if we want it to be.
Celebration, connection, collaboration and communication all go into User Generated Content (UGC). Defines UGC as having *not* been through formal peer review, but there is peer review through blog comments, Wikipedia and Twitter conversations. Notes Wikipedia as the largest human rhizomatic structure in the world.
Moving on to copyleft and the Creative Commons. Rheingold on networking as a key literacy of the 21st century, in terms of amplifying your content and knowledge.
Communities of learning and professional learning networks – with a nod to six degrees of separation, though he thinks it is down to two to three degrees as we can network with people much more easily. Collaborative open networks, where information counts as knowledge if it is useful to the community. Dave Cormier (2007) on rhizomatic knowledge that has no core or centre, where the connections become more important than the knowledge. Knowledge comes out of the processes of working together. This can be contrasted with the closed nature of the LMS/VLE, and students will shift as much as possible to their personal learning environments.
Have to mention MOOCs, and the original cMOOCs were very much about opening content on a massive scale, led by students. The xMOOC has closed and boxed the concept, generating accusations of a shallow learning experience.
Open access publishing. Gives the example of two of his papers: one was in an open access journal that underwent open peer review. The original paper, the reviewer comments, the response and the final paper were all published – open publishing at its best! The other paper went to a closed journal and took three years to publish – the open journal took five months. The closed journal paper has 27 citations against 1,023 for the open journal.
Open publishing amplifies your content, eg, the interactions generated through sharing content on SlideShare. His blog has about 100k readers a month and is another form of publication and all available under Creative Commons.
This is about adaptation to make our research and knowledge more available and more impactful.
Question: how are universities responding to openness?
A: It depends on the university’s business model – cites the freemium model, with a basic ‘product’ being available for free. In the example of FutureLearn, partner content is given away for free, with either paid-for certification or use as a way of enhancing recruitment to mainstream courses.
Now time for lunch
Now back and looking at measuring impact with Euan Adie from Altmetric.
Using the idea that the impact of research is about making a difference. Impact includes:
– quality: rigour, significance, originality, replicability
– attention: the right people see it
– impact: it makes a difference in terms of social, economic and cultural benefits
REF impact is assessed on quality and impact. A ‘high impact journal’ assumes the journal is of quality and that the right people see it (attention).
Impact is increasingly important in research funding across the world, so it is important to look at how impact is understood and measured.
Traditional citation counts measure attention – scholars reading scholarship.
The altmetrics manifesto – acknowledging that research is available and used online, we can capture some measures of attention and impact (not quality). This tends to look at non-academic attention through blog posts and comments, Tweets and newspapers, and at impact on policy-makers. But what this gives is data: a human has to interpret it and put it into context via narrative.
Anna Clements on the university library at St Andrews University. What are the policy drivers for the focus on data: research assessments, open access requirements (HEFCE, RCUK) and research data management policies (EPSRC, 2015). These require HE to focus on the quality of research data with a view to REF2020, asset exploitation, promotion and reputation, and managing research income – as well as student demand/expectations, especially following the increase in fees. So libraries are taking the lead in institutional data science within the context of financial constraints and ROI, working with academics.
Developing metrics jointly with other HEIs as Snowball Metrics, involving UK, US and ANZ institutions as well as publishers; the metrics are open and free to use.
Kaveh Bazargan from River Valley Technologies on “Letting go of 350 years’ legacy – painful but necessary”. The company specialises in typesetting maths-heavy texts but has more recently developed publishing platforms.
It has been, in many ways, a fairly quiet week as I was working on:
exam boards as it’s that time of year
planning course staffing for next year so my head was buried in spreadsheets for a while
researching literature on communities on Twitter and considering the role of hashtags and trending topics in generating a sense of being part of an imagined (virtual) community
virtual meetings with various students and with Yulia Sidorova to discuss researching social media.
But I’ve mainly been feeling tired and a bit wiped out so could probably do with a break … luckily, it’s the weekend!