Tag Archives: methods

IT Futures at Edinburgh

I’m attending the IT Futures conference at Edinburgh today. These notes are not intended to be a comprehensive record of the conference but to highlight points of interest to me and so will be subjective and partial.

A full recording of the conference will be available at the IT Futures website

The conference opens with an address from the Principal, Sir Timothy O’Shea, offering an opening perspective:

Points to the strengths of the University in computing research, super-computing and so on, and notes it is ‘ludicrously’ strong in e-learning, with 60-plus online postgraduate programmes. In these areas, our main competitors are in the US rather than the UK.

Beginning with a history of computing from the 1940s onwards. Points to Smallwood on using computers for self-improving teaching, and to Papert on computing/e-learning for self-expression. In the 1980s/90s digital education was dominated by the OU. The 1990s saw the rise of online collaborative learning, an unexpected development that addressed the criticism that e-learning (computer-assisted learning) lacked interactive/personalisation elements.

The 2000s saw the rise of OERs, and of MOOCs as a form of providing learning structure around OERs. Also noted the success of OLPC in Uruguay, one of the few countries to implement OLPC effectively.

Argues that the expansion of digital education has been pushed by technological change rather than pedagogical innovation. We still refer to the constructivism of Vygotsky while technology innovation has been massive.

How big is a MOOC?
– 100 MOOCs is about the equivalent in study hours of a BA Hons. A MOOC is made up of a thousand ‘minnows’ (I think this means small units of learning). MOOCs are good for access, as tasters, and to test e-learning propositions. They also contribute to the development of other learning initiatives and enhance institutional reputations, including relevance through ‘real-time MOOCs’ such as the one on the Scottish referendum. MOOCs provide a resource for learning analytics.
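(A rough back-of-the-envelope check on that sizing, using my own assumptions rather than figures from the talk: a UK BA Hons is typically 360 credits at 10 notional study hours per credit, so 360 × 10 = 3,600 hours, and 3,600 / 100 = 36 hours per MOOC – plausible for a five-week course at around seven hours a week.)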

So e-learning is mature, not new; blended learning is ‘the new normal’, dominated by the leading university brands of MIT, Stanford, etc. A huge contribution of e-learning is access.

A research agenda: to include modelling individual learning, including predictive learning support; speed of feedback; effective visualisation; supporting collaboration; understanding natural language; the location of the hybrid boundary (e.g., in practical tests); and personal programming (coding), including how realistic it is for non-geeks to develop meaningful coding skills.

Open questions are around data integrity and ownership; issues of digital curation; integration of data sources; who owns the analysis; should all researchers be programmers?; and how to implement the concept of the learner as researcher?

Questions:

Question about artificial intelligence. A: Tim O’Shea’s initial research interest was in developing programs that would teach intelligently – self-improving teachers – but using AI proved too difficult, so he switched towards MIT’s focus on self-expression and on helping programmers understand what their code was doing. He still thinks the AI route is too difficult to apply to educational systems.

Q: surprised by an absence of gaming for learning?

A: clearly games can support learning, and cites Stanford work on the influence of games on learning motivation

Q: on academic credit and MOOCs

A: Thinks this is inevitable and points to Arizona State University, which is attempting to develop a full degree through MOOCs. He can see the inclusion of MOOCs in particular postgraduate programmes – a heuristic of about a third of a Masters delivered via (external) MOOCs – though this is more likely to be taken forward by more vocational universities in the UK, with MIT or Stanford MOOCs replacing staff!

Now moving on to Susan Halford on ‘Knowing Social Worlds in the Digital Revolution’:

Researches organisational change, work and digital innovation. She has not directly researched changes in academic work but has experienced them through digital innovation. Digital innovation has kick-started a revolution in research through the volume of data and the ability to track, analyse and visualise all sorts of data. So data is no longer simply used to research something; it has become the object of social research itself.

Digital traces may tell us a lot about how people live, live together, their politics, attitudes, etc. Data captures social activities in real time and over time, rather than relying on the reporting of activities in interviews, surveys and so on. At least, that is the promise, and there is a set of challenges to be addressed to realise the potential of these data (also see this paper from Prof Halford).

Three key challenges: definition, methods and interdisciplinarity.

Definition – what are these digital data? They are not naturally occurring and do not provide a telescope onto social reality: digital data are generated through mediation by technology. In the case of Twitter there is a huge amount of data, but it is mediated by the technological infrastructure that packages it. The world is, therefore, presented according to the categories of the software – interesting, but not naturally occurring, data. Also, social media generate particular behaviours and are not simply mirrors of independent social behaviour – the example given is the ReTweet.

Also, there is the issue of the provenance and ownership of data. Survey data is often transparent in the methods used to generate it and, therefore, in the limits of the claims that can be made from it. But social media data is not transparent in how it is generated – the data is privately owned, and the construction of data categories and data streams is not transparent. We know that there is a difference between official and unofficial data. We do not know what Twitter is doing with its data, only that it is part of an emerging data economy. So this data is not neutral; it is the product of a series of technological and social decisions that shape it, and we need to understand the socio-technical infrastructure that created it.

Method – the idea that in big data the numbers speak for themselves is wrong: numbers are interpreted. The methods we have are not good for the analysis of large data. Research tends towards small-scale content analysis or large-scale social network analysis, but neither is particularly effective at understanding the emergence of the social over time – at harnessing the dynamic nature of the data. A lot of big data research on Twitter is limited to mathematical structures and data mining (and is atheoretical), and is weak on the social aspects of social media data.

Built a tool at Southampton to dynamically map data flows through ReTweeting.
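Prof Halford didn’t describe the Southampton tool itself, but a minimal sketch of the underlying idea – building a directed graph of retweet flows from tweet data – might look like the following (my own illustration: the tweet-dict shape follows Twitter’s classic `retweeted_status` JSON, and the function name is made up):

```python
import networkx as nx

def build_retweet_graph(tweets):
    """Build a directed graph of retweet flows.

    Each edge points from the original author to the retweeter, so paths
    trace how a tweet travelled through the network; edge weights count
    repeated flows between the same pair of accounts.
    """
    g = nx.DiGraph()
    for tweet in tweets:
        rt = tweet.get("retweeted_status")
        if rt is None:
            continue  # not a retweet, so no flow to record
        source = rt["user"]["screen_name"]      # original author
        target = tweet["user"]["screen_name"]   # retweeter
        if g.has_edge(source, target):
            g[source][target]["weight"] += 1
        else:
            g.add_edge(source, target, weight=1)
    return g

# Illustrative usage with made-up data:
tweets = [{"user": {"screen_name": "bob"},
           "retweeted_status": {"user": {"screen_name": "alice"}}}]
print(build_retweet_graph(tweets).edges(data=True))
# [('alice', 'bob', {'weight': 1})]
```

Mapping the dynamics Halford describes would then be a matter of bucketing tweets by timestamp and building one such graph per interval.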

Interdisciplinarity: but it is a challenge to operationalise interdisciplinarity.

Disciplines imagine their object of study in (very) different ways and with different forms of cultural capital (what is the knowledge that counts – ontological and epistemological differences). So the development of interdisciplinarity involves changes on both sides – researchers need to understand programming and computer scientists need to understand social theory. But also need to recognise that some areas cannot be reconciled.

Interdisciplinarity leads to questions of power-relations in academia that need to be addressed and challenged for inter-disciplinarity to work.

But this work is exciting and promising as a field in formation. It also raises responsibilities: ethical responsibilities involved in representing social groups and societies through data analytics; recognising that digital data excludes those who are not digitally connected; and recognising that data alone is inadequate, as social change involves politics and power.

Now Sian Bayne is responding to Prof Halford’s talk: she welcomes the socio-technical perspective taken and points to a recent paper, “The moral character of cryptographic work”, as generating interest across technical and social scientists.

Welcomes the emphasis of interdisciplinarity while recognising the dangers of disciplinary imperialism.

Questions:

What actions can be taken to support interdisciplinarity?

A: shared resources and shared commitments are important. Academic structures also matter – refers to the REF structures working against people submitting across multiple subjects (but it is pointed out that joint submissions are possible).

Time for a break ….

 

We’re back, with Burkhard Schafer of the School of Law talking on the legal issues of automated databases. Partly this is drawn from a PG course on the legal issues of robotics.

The main popular reference on the regulation of robots is Terminator, but the more instructive worries come from Short Circuit: when the robot reads a book, does it create a copy of it? Does the licence allow the mining of the data of the book? See the Qentis hoax. The UK is the only country to recognise copyright ownership of automatically generated works/outputs, but this can be problematic for research – can we use this data for research?

If information wants freedom, do current copyright and legal frameworks support and enable research, teaching, innovation, etc.? Similar issues arose from the industrial revolution.

Robotics is replacing labour – initially manual labour, but now there are examples of the use of robots in teaching at all levels.

But can we automate the dull parts of academic jobs? This creates some interesting legal questions, e.g., in Germany giving a mark is an administrative act, similar to a police caution, and is subject to judicial review – can a robot undertake an administrative act in this way?

Lots of interesting examples of automated education and teaching digital services were shown:

[Screenshot: examples of automated education and teaching digital services]

A good question for copyright law is what ‘creativity’ means in a world shared with automatons. For example, when does a computer shift from thinking to expressing an idea – a distinction fundamental to copyright law?

Final key question is: “Is our legal system ready for automated generation and re-use of research?”

Now it’s Peter Murray-Rust on academic publishing, demonstrating text/content mining of chemistry texts.

…And that’s me for the day as I’m being dragged off to other commitments.

Open online spaces of professional learning: searching for understanding the ‘material’ of Twitter discussion events

Here are the slides from my presentation to the Social Informatics cluster group meeting of 13 June 2014.

Abstract:

Recent years have seen a growth in micro-blogging discussion events intended to support professional learning communities (McCulloch et al., 2011; Bingham and Conner, 2010). These events often take place on Twitter and are open to anyone using that service. The synchronous events are organised through the convention of hashtags (#) combined with a shortened name as an explicit mechanism to aggregate contributions and enable open interactions (Bruns 2011).
This presentation will explore an initial investigation of two of these Twitter discussion event communities, both targeting corporate learning and development professionals. The overall study is concerned with how social discourses within a specific context emerge as sense-making and legitimation strategies around particular practices (Phillips and Hardy 2002: 25), and so will employ a multi-modal discourse analysis approach (LeVine and Scollon 2004). However, the data from these Twitter discussion events does not have a transparently coherent structure, as discussion sequences run concurrently and interrupt one another (Honeycutt and Herring 2009). So, with the purpose of “making sense of the data”, this presentation outlines the approaches used in identifying and analysing the key patterns of participation and structures of the Twitter discussion events. The descriptive statistical approaches suggested by Bruns (2014) are used to analyse the Twitter events and to discuss the limits of such analysis with reference to recent debates on the nature and status of ‘data’ in digital research (boyd and Crawford 2012; Baym 2013). The extent to which this kind of analysis can reveal the power and participation strategies of Twitter users in these events will be discussed.
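As a concrete illustration of the kind of Bruns-style descriptive statistics referred to in the abstract – not the code used in the study – a minimal sketch could count tweets per participant and the share of @-replies in a hashtag archive, assumed here to be a CSV with `user` and `text` columns:

```python
import csv
import re
from collections import Counter

def participation_stats(csv_path):
    """Simple participation metrics for a hashtag archive."""
    per_user = Counter()
    replies = 0
    total = 0
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            per_user[row["user"]] += 1
            # a leading @mention marks the tweet as a reply/interaction
            if re.match(r"@\w+", row["text"]):
                replies += 1
    share = (replies / total) if total else 0.0
    return total, per_user.most_common(10), share

# "hashtag_archive.csv" is a placeholder for an exported event archive
total, top_ten, reply_share = participation_stats("hashtag_archive.csv")
print(f"{total} tweets; {reply_share:.0%} @-replies")
for user, n in top_ten:
    print(f"{user}: {n} tweets")
```

Even these simple counts expose the participation structure – a few heavy contributors and many occasional ones – of the kind the presentation examines.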

UFHRD Conference: 4 June 2014, opening keynote

The conference welcome is from Dave McGuire of Edinburgh Napier University including a short welcome video prepared by one of his students with a good number of talking heads.

The opening keynote address is by Prof Jonathan Passmore [JP] with the title: “Coaching Research: The Good, the Bad and the Ugly”.

The session looks at coaching research, especially coaching in organisations, offering a critical review of the literature framed as the good, the bad and the ugly, and covering:

1. why research coaching
2. what makes for good quality coaching research
3. key themes in coaching research and
4. suggested directions for research in the coming decade.

‘Why research?’ is a question he asks in organisations, and the usual response is that there’s no need as “we know it works”. But coaching involves risks, and there is a need to demonstrate effectiveness and ROI, with positive outcomes for individuals and organisations. This is difficult, though, in terms of agreeing participation and measuring intangible benefits, which can make results difficult to publish.

The quality of research depends on a research question that is clearly defined and bounded; a research method that is appropriate, clearly described for purposes of replication, and correctly executed; results that are compared with and positioned within earlier research; and conclusions that are appropriate and do not over-claim, as well as identifying new questions.

Critical questions to ask of research results are whether a placebo effect is occurring or whether other factors contaminated the research, e.g., other training going on, or the selection of high performers leading to positive outcomes. Also, can the research be replicated? Few studies meet these criteria.
We can look at phases of coaching studies: phase 1 involved case studies and surveys; phase 2 involved theory development through qualitative research, which is valuable in immature research areas like coaching – putting up a straw man to be challenged; phase 3 saw initial randomised controlled trials (RCTs), small-scale (25–40 people) but providing important evidence on individual and psychological impacts; phase 4 saw larger RCTs (Passmore & Rehman 2012); and phase 5 sees an increased use of meta-analysis, aided by the increased ease of access to data sources as well as the impacts of the ‘computational turn’.

These studies have identified a number of popular themes: coach behaviour has attracted lots of papers, as has the coach-client relationship. But there is only limited research on client decision-making about coaching, and an increasing amount of research on the impact of coaching.

Coach behaviour research, e.g., Hall et al (1999), involving interviews of coaches and clients, identified some tentative behaviours that have been validated by subsequent studies, especially around discursive and collaborative approaches and the power relations and dynamics of working collaboratively. Probing and challenge is an emerging area, as a distinction from the empathy focus of counselling; JP cites client work showing that senior leaders relish challenge. Aspects of confidentiality are critical to effective coaching, covering risky behaviour as well as commercial confidentiality, and maintaining professional distance is also important in the evidence on effective coaching.

Literature on the coach-client relationship focuses on the development of an alliance between coach and client, but there is little evidence of what factors make a successful relationship, although these can be inferred from other studies, e.g., empathy.

Outcome studies: McGivern et al (2001) was an ROI study based on Jack Phillips’ method of ROI, leading to an estimate-based approach in which the resulting number was then cut in half – although this was not really justified. JP assessed this as twaddle and rubbish: we need different methods for HRD (the bad research).

Identified 156 outcome studies between 1998 and 2010. Of these, most are small-scale with 30 or so participants, and some are RCTs. Miller used a quasi-experimental study and found no statistically significant beneficial impact of coaching, but this may be because the coaching intervention was limited and didn’t lead to behavioural change, or because managers tended to revert to more directive styles. Also, a lot of RCT studies involve students rather than people in organisations, but these did show psychological benefits of coaching around resilience and mental health. Passmore & Rehman’s (2012) RCT of military drivers found that a coaching approach reduced training time and increased success rates.

Some outcome studies have involved longitudinal research evidencing a longer-term effect of coaching, which may indicate that coaching produces deeper learning and greater behavioural change than training interventions.

But coaching still has only a small number of studies, and these have small sample sizes compared to studies in health settings; conducting RCTs in organisations is difficult, for example. Also, isolating the variables and factors of interest can be difficult (the Hawthorne effect), outcome study methods are often not fully described, and research is often undertaken by champions of coaching, with inevitable biases.

Meta-analysis research, e.g., De Meuse, Dai and Lee (2009), but this was based on only four papers, so interesting in terms of being a meta-analysis but based on very little data (the ugly). Theeboom et al (2013) and Jones (in press) are more robust papers. Theeboom found positive benefits around factors such as coping, goal-directedness and self-regulation, performance, attitudes and well-being, at about the same level as other L&D interventions. So coaching is one of a number of effective interventions available for L&D practice. The Jones study covers 24 RCT studies and looked at effect size by style of coaching, finding a larger effect size for internal coaches compared to external coaches. Jones found that coaching had a medium to strong positive impact, but the findings should be treated with caution given the small number of papers used.
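For readers unfamiliar with how meta-analyses of this kind pool results, here is a minimal sketch of the standard fixed-effect, inverse-variance method; the effect sizes and variances are invented for illustration and are not the data from the studies cited:

```python
# Fixed-effect meta-analysis: pool per-study effect sizes (e.g. Cohen's d),
# weighting each study by the inverse of its variance so that more precise
# (usually larger) studies count for more.
studies = [  # (effect size d, variance) - made-up illustrative values
    (0.45, 0.04),
    (0.30, 0.09),
    (0.60, 0.02),
]

weights = [1 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = (1 / sum(weights)) ** 0.5  # standard error of the pooled estimate

print(f"pooled d = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
# pooled d = 0.52 (95% CI 0.31 to 0.73)
```

The caution JP raises is visible here: with only a handful of studies, the pooled estimate and its confidence interval are only as good as the few inputs behind them.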

The future of coaching research may be dominated by: (a) a business-school use of case studies; (b) an organisational psychology model that disconnects scholarship from practice; or (c) a medically based approach with an emphasis on evidence-based practice that informs experts, including scholar-practitioners.

Research needs to aim for larger RCTs with random allocation to two or more interventions, a control group and a placebo group. Research needs to identify the factors for effective coaching, and larger-scale meta-analyses are needed to identify impact effect sizes.

This will improve understanding of the efficacy and appropriateness of coaching versus other interventions, and then of which approaches to coaching are appropriate for different needs and which coaching behaviours are most effective. Also, identifying when a client is ready for coaching, in terms of both the individual and the organisation (i.e., managerial support and a supportive culture). Lastly, coach behaviour research underpins PG programmes and professional body competences.

Networked Learning Conference

More from the Networked Learning Conference, with Petar Jandrić on research methods and the post-disciplinary challenge of networked learning. Good research has an ‘itch to scratch’. In the case of networked learning, there is a range of different methodological approaches, which raises the question of how to compare and synthesise different approaches in networked learning.
The Rise of Disciplinarity: Ancient Greece had no disciplinary borders, but as knowledge became more complex, disciplines began to emerge, e.g., the seven liberal arts identified in the 7th century that still form the structure of humanities disciplines in western HE to the present day. The liberal arts were articulated as the education of a gentleman by the C19th (Parker 1890), implying other educations suitable for others, e.g., vocational. So disciplinarity became linked to issues of class and culture.
Linking disciplinarity and technique: as human techniques develop there is increased complexity, so we need more disciplines to cope. But this leads to fragmentation between disciplines, as the restrictions of specialisation mean missing the bigger picture. Disciplines must also, therefore, shape how we perceive future possibilities.
Disciplinarity and the network: radical change in science occurs through ‘blue skies research’ led by superstar scientists and is formally recognised. But what gets funded is applied research (STEM etc.).
New fields of research such as environmental science and networked learning are postdisciplinary (see Buckler 2004). The diversity of the field requires diverse knowledge. This opens up large opportunities for forming connections between disciplines and research methods but faces large epistemological challenges.
Four postdisciplinary approaches:
1. multidisciplinarity, e.g., approaching the field through technology studies or through learning theory
2. interdisciplinarity, which seeks integrative results through different methods
3. transdisciplinarity, which seeks to inform and transform research through integrating disciplines
4. antidisciplinarity, where disciplines are abandoned entirely.
All these approaches open up questions about the nature of inquiry in networked learning, and point to the importance of being critically conscious of the way we inquire into networked learning.

Q: is antidisciplinarity feasible given the strengths of disciplines? But also, if there are no disciplinary boundaries, is that not an interesting space to be in?
A: cites the example of HIV/AIDS and the ‘educationalisation’ of medicine, e.g., through preventative awareness-raising.

The point is made that networked learning is a field to which people bring their disciplines.

Cathy Adams and Terrie-Lynn Thompson on the materialities of posthuman inquiry. Have you considered that the tools used in research may also shape that practice? Their effects on the research process can be positive and negative. Academic expertise is bound up with the technologies used daily, which shape that practice and its performative outcomes. Digital technologies are the encoded materialities of academic practice. They draw on the insights provided by Actor Network Theory (ANT) and by phenomenology. Ingold explores the link between materiality and phenomena as correspondence. In ANT, subject-object separation is undermined through symmetry, while in phenomenology the subject-object division becomes translucent.
Research practice involves assemblages of long lists of tools: diffusion tools, search engines, storage tools, visualisation software, etc. These are enrolled in research practice through digital traces, including digital artefacts. So digital devices may participate as co-researchers, storing, sharing and extending data. This decentres the human expert in eliciting and generating data, and can be dynamic, leading to movement and slippage. So the researcher is both deskilled, as research is outsourced to digital tools, and upskilled, e.g., in research data curation.
NVivo is presented by QSR as a solution to the ‘problem’ of qualitative research, but the software may configure and circumscribe research practice (see Introna 2012). Research found NVivo enhanced the quality of data while reducing the tactility of research and enhancing the position of the technologist. Researchers found that the demands of NVivo overtook the intent of the research: researchers must subscribe to the methodological assumptions and structures of the software.
What are the implications of encoded research practices for researching networked learning? Non-human actors should be treated as part of the research team – their ‘views’ taken into account.
Fluencies can be seen in:
1. agency as researchers being shared with encoded actors, as entanglements
2. research practices undergoing deskilling and upskilling, including through the attraction of delegation
3. new enactments of data
4. the scale and mobility of data being reconfigured.

Points of friction: research defined by technologies; research perceived as less objective if less techie; the attraction of exotic tech; the outsourcing of research tasks; increased expectations of the speed of research.

Q: pushes back on the issue of symmetry – there is a qualitative difference between human and non-human in the research assemblage; a telescope allows us to see the moon but is not a co-researcher.
A: argues that the non-human component enhances the researcher. In the case of encoded technologies, the algorithm is too often black-boxed, but its impact on the research process needs to be opened up.

And I’m running out of steam now, but it’s worth following the tweets here