On November 30, 2022, we experienced the “iPhone moment of AI”. Here is my attempt to give brief answers to the most important questions:

1. What do university teachers need?
2. What should universities and educational policy do?
3. What will happen to higher education?
4. What should be the guiding principle?
5. What about exams?
6. What about the negative impacts?
7. Where does the human factor fit in?

Conclusion: We cannot effectively address the challenges of implementing AI in higher education if we do not take into account the context (goals, policies, and culture of higher education).

1. What do university teachers need?

Teachers will increasingly have to engage deeply with AI, Large Language Models (LLMs), and their significance for academic writing. The opportunities to do so are growing: handbooks, recommendations, and practical reports have already been compiled and, in many cases, curated, and webinars offer expert input and discussion. Remarkably, there is often a strong focus on exchange and networking among practitioners within the scientific disciplines.

2. What should universities and educational policy do?

Universities and educational policy should now pave the way for making AI technology available to all members of the university for exploration, experimentation, and everyday use. Even small solutions can grow into productive experimental spaces for AI in education. It should be the task of support staff in university and media didactics, as well as in university development, to provide not only information but also guidance.

3. What will become of higher education?

The core tasks of education (in the emphatic, so-called “Humboldtian” sense) will remain unchanged. The ideal remains that of autonomous citizens who, as participating, fully developed individuals, develop and apply their specialized knowledge in a communicative, cooperative, creative, and critical manner. However, the advent of AI requires new content to be developed and integrated: the basic and specialized knowledge, skills, and vocational orientations of higher education must be reviewed and expanded with regard to AI. It is crucial that nothing is left out and nothing is treated as untouchable. This applies in particular to the function of, and perspective on, text production.

4. Orientation toward what?

The simple answer is: toward quality. In the presentations by Anika Limburg or Doris Weßels, for example, but also by others, there always seems to be an axis along which the use of AI ranges from technology-driven “avoidance of learning” through “support” to genuinely co-creative work with AI. The litmus test is: who or what is steering the process? The first message is that avoidance behavior must be counteracted. The second is that good co-creative work with AI greatly enhances quality, but also the effort involved. The same is likely to apply when instructors use it for their teaching…

5. What will happen to exams at the university?

Apparently, not much is happening (yet?). After the initial shock (“Homework is dead”), necessary adjustments are being discussed: declarations of independent work, more complex questions, supervised settings, and so on. The tendency seems to be to salvage traditional exams rather than to undertake the hoped-for fundamental rethinking. It is genuinely regrettable that this is argued mostly on administrative, legal, and economic grounds, even though there is broad consensus that the prevailing examination culture should be re-evaluated.1, 2

6. What about the negative impacts?

AI and ChatGPT are more a projection screen for digital discomfort than its cause. The dystopian narratives once again focus on the erosion of the scientific system, the devaluation of the scientific profession, the loss of social connection and participation, and the so-called de-skilling of students. An exacerbation of global and local digital divides is to be feared. Moreover, the private, commercial, solutionist orientation of powerful players leaves AI open to abuse by psychopathic actors well before any mega-AI takes over world domination.

7. Where is the human?

We may speculate about what AI means for the future of humanity, but that is about all we can (and should) do. Instead, we can direct our attention to the present, where humans are already disappearing from view: the click workers in the Global South who train the AIs, the people who created all the material analyzed by the chatbot’s algorithms, and the many computer scientists who contributed directly or indirectly to the creation of ChatGPT. ChatGPT has absorbed the labor and creative power of all these people and bundled it in the hands of a single company. If we remain aware of this, we also realize that the machine does not replace humans; rather, it alienates them from us and from their products in a new and drastic way.

Conclusion

ChatGPT, too, shows how large the gap is between real (technical) disruption and potential (pedagogical) innovation. We cannot successfully meet the challenge of implementing AI in higher education if we do not consider its context (goals, policies, and culture of higher education). As we critically analyze the present and look optimistically toward the future, we should keep asking the relevant questions, broadening the scope of the discussion, and identifying courses of action and their consequences.

  1. Notabene I: I wonder when it will be recognized that the “oral examination” is highly questionable in terms of quality criteria and, if conducted properly, involves considerable additional effort.
  2. Notabene II: I simply do not understand the, pardon my language, stupid attempts like “ChatGPT passes XY exam”. After all, the bot has been fed huge amounts of specialized literature, exam questions, and probably entire practice exams. So why should it be surprising that it produces correct solutions to standard exams?
