On 30 November 2023, the Australian federal government released its Australian Framework for Generative AI in Schools. This is an important step forward. It provides much-needed advice for schools following the November 2022 release of ChatGPT, a technological product capable of creating human-like text and other content. The Framework has undergone several rounds of consultation across the education sector. It does important work in acknowledging opportunities while also foregrounding the importance of human wellbeing, privacy, security and safety.
Out of date already?
However, in this fast-moving space, the policy may already be out of date. Following early enthusiasm (despite a ban in many schools), the hype around generative AI in education is shifting. As experts in generative AI in education who have been researching it for some years now, we have moved to a much more cautious stance. A recent UNESCO article stated that “AI must be kept in check in schools”. The challenges of using generative AI safely and ethically, for human flourishing, are becoming increasingly apparent.
Some questions and suggestions
In this article, we suggest some of the ways the policy already needs to be updated and improved to better reflect emerging understandings of generative AI’s threats and limitations. Given the Framework’s 12-month review cycle, teachers may find it provides less policy support than hoped. We also wonder to what extent the educational technology industry’s influence has affected the tone of this policy work.
What is the Framework?
The Framework addresses six “core principles” of generative AI in education: Teaching and Learning; Human and Social Wellbeing; Transparency; Fairness; Accountability; and Privacy, Security and Safety. It provides guiding statements under each principle. However, some of these concepts are much less straightforward than the Framework suggests.
Problems with generative AI
Over time, users have become increasingly aware that generative AI does not provide reliable information. It is inherently biased, through the biased material it has “read” in its training. It is prone to data leaks and malfunctions. Its workings cannot be readily perceived or understood by its own makers and vendors; it is therefore not transparent. It is the subject of global claims of copyright infringement in its development and use. It is vulnerable to power and broadband outages, suggesting the dangers of developing reliance on it for composing content.
Impossible expectations
The Framework may therefore have expectations of schools and teachers that are impossible to fulfil. It suggests schools and teachers can use tools that are inherently flawed, biased, mysterious and insecure in ways that are sound, unbiased, transparent and ethical. If teachers feel their heads are spinning on reading the Framework, it is not surprising! Creators of the Framework need to interrogate their own assumptions, for example that “safe” and “high quality” generative AI exists, and ask who these assumptions serve.
As a policy document, the Framework also puts an extraordinary onus on schools and teachers to do high-stakes work for which they may not be qualified (such as conducting risk assessments of algorithms), or that they do not have time or funding to complete. The latter include designing appropriate learning experiences, revising assessments, consulting with communities, learning about and applying intellectual property rights and copyright law and becoming expert in the use of generative AI. It is not clear how this can possibly be achieved within existing workloads, and when the nature and ethics of generative AI are complex and contested.
What needs to change in the next iteration?
- A better definition: At the outset, the definition of generative AI needs to acknowledge that it is, in most cases, a proprietary tool that may involve the extraction of school and student data.
- A more honest stance on generative AI: As a tool, generative AI is deeply flawed. As computer scientist Deborah Raji says, experts need to stop talking about it “as if it works”. The Framework fails to acknowledge that generative AI is always biased, in that it is trained on limited datasets and with motivated “guardrails” created largely by white, male and United States-based developers. For example, a current version of ChatGPT does not speak in or use Australian First Nations words, for valid reasons related to the integrity of cultural knowledges. However, this indicates the whiteness of its “voice” and the problems inherent in requiring students to use or rely on this “voice”. The “potential” bias mentioned in the Framework would be better framed as “inevitable”. Policy also needs to acknowledge that generative AI is already creating profound harms, for example to children, to students, and to the climate through its unsustainable environmental impacts.
- A more honest stance on edtech and the digital divide: A recent UNESCO report has confirmed there is little evidence of any improvement to learning from the use of digital technology in classrooms over decades. The use of technology does not automatically improve teaching and learning. This honest stance also needs to acknowledge that there is an existing digital divide related to basic technological access (to hardware, software and connectivity) that means that students will not have equitable experiences of generative AI from the outset.
- Evidence: Education is meant to be evidence-informed. Given there is little research that demonstrates the benefits of generative AI use in education, but research does show the harms of algorithms, policymakers and educators should proceed with caution. Schools need support to develop processes and procedures to monitor and evaluate the use of generative AI by both staff and students. This should not be a form of surveillance, but rather take the form of teacher-led action research, to provide future high-quality and deeply contextual evidence.
- Locating policy in existing research: This policy has missed an opportunity to connect to the extensive policy, theory, research and practice around digital literacies developed since the 1990s, especially in English and literacy education, from which all disciplines could benefit. The policy has similarly missed an opportunity to foreground how digital AI-literacies need to be embedded across the curriculum, supported by relevant existing frameworks such as the Literacy in 3D model (developed for cross-curricular work), with its focus on the operational, cultural and critical dimensions of any technological literacy. Another key concept from digital literacies is the need to learn both “with” and “about” generative AI. Education policy also needs to reference educational concepts, principles and issues, including automated essay scoring, learning styles, personalised learning and machine instruction, ideally with a glossary of terms.
- Acknowledging the known dangers of bots: It would also be useful for policy to be framed by long-standing research that demonstrates the dangers of chatbots, including their compelling capacity to shut down human creativity and criticality, and to suggest ways to mitigate these effects from the outset. This is particularly important given the threats to democracy posed by misinformation and disinformation generated at scale by humans using generative AI.
- Teacher transparency: All use of generative AI in schools needs to be disclosed. The use of generative AI by staff in the preparation of teaching materials and the planning of lessons needs to be disclosed to management, peers, students and families. The Framework seems to focus on students and their activities, whereas “academic integrity” needs to be modelled first by teachers and school leaders. Trust and investment in meaningful communication depend on readers knowing the sources of content; otherwise, cynicism may result. This disclosure is also necessary to monitor and manage the threat to teacher professionalism posed by the replacement of teacher intellectual labour with generative AI.
- Stronger acknowledgement of teacher expertise: Teachers are experts in more than just subject matter. They are expert in the pedagogical content knowledge of their disciplines, or how to teach those disciplines. They are also expert in their contexts, and in their students’ needs. Policy needs to support education in countering the rhetoric of edtech that teachers need to be removed or replaced by generative AI and remain only in support roles. The complex profession of teaching, based in relationality and community, needs to be elevated, not relegated to “knowing stuff about content”.
- Leadership around ethical assessment: OpenAI stated clearly in 2023 that generative AI should not be used for summative assessment and that such assessment should be done by humans. It is unfortunate the Australian government did not reinforce this advice at a national policy level, to uphold the rights of students and protect the intellectual labour of teachers.
- More detail: While acknowledging this is a high-level policy document and Framework, we call for more detail to assist the implementation of policy in schools. Given the aim of “defining what safe, ethical and responsible use of generative AI should look like”, the document is surprisingly brief; a related education document from the US runs to 67 pages.
A radical policy imagination
At the 2023 Australian Association for Research in Education (AARE) conference, Jane Kenway encouraged participants to develop radical research imaginations. The extraordinary impacts of generative AI require a radical policy imagination, rather than timid or bland statements balancing opportunities and threats. It is increasingly clear that the threats cannot readily be dealt with by schools. The recent thoughts of UNESCO’s Assistant Director-General for Education on generative AI are sobering.
A significant part of this policy imagination needs to find the financial and other resources to support slow and safe implementation. It also needs to acknowledge, at the highest possible level, that if you identify as female, if you are a First Nations Australian, indeed, if you are anything other than white, male, affluent, able-bodied, heterosexual and compliant with multiple other norms of “mainstream” society, it is highly likely that generative AI does not speak for you. Policy must define a role for schools in developing students who can shape a more just future generative AI, not just use existing tools effectively.
Who is in charge . . . and who benefits?
Policy needs to enable and elevate the work of teachers and education researchers around generative AI, and the work of the education discipline overall, to contribute to raising the status of teachers. We look forward to some of the above suggestions being taken up in future iterations of the Framework. We also hope that all future work in this area will be led by teachers, not merely involve consultation with them. This includes the forthcoming work by Education Services Australia on evaluating generative AI tools. We trust that no staff or consultants on that project will have any links whatsoever to the edtech or broader technology industries. This is the kind of detail that may help the general public decide exactly who educational policy serves.
Generative AI was not used at any stage in the writing of this article.
The header image was definitely produced using Generative AI.
An edited and shorter version of this piece appeared in The Conversation.
Lucinda McKnight is an Australian Research Council Senior Research Fellow in the Research for Educational Impact Centre at Deakin University, undertaking a national study into the teaching of writing with generative AI. Leon Furze is a PhD Candidate at Deakin University studying the implications of Generative Artificial Intelligence in education, particularly for teachers of writing. Leon blogs about Generative AI, reading and writing.
Republish this article for free, online or in print, under Creative Commons licence.