Not just ChatGPT

Artificial intelligence is becoming increasingly common in daily life and progresses in leaps and bounds that are difficult to predict. Universities encourage the use of AI applications, but in an open and responsible manner.

Text Terhi Hautamäki · Images Outi Kainiemi · Translation Marko Saajanaho

“In the familiarisation phase.” This is how Hanna-Leena Pesonen, Dean and Professor at the Jyväskylä University School of Business and Economics, describes her relationship with generative language models. She has used ChatGPT to create summaries and provoke thought.

“It helped me quite nicely to organise my speech for one event. I did not use AI-generated text directly, but instead I took a couple of different versions”, Pesonen recounts.

When ChatGPT was released in the 2022 holiday season, its use immediately became a talking point at the School of Business and Economics. In January 2023, the JSBE published its policy. The discussion started with the risks – how ChatGPT could be abused in one’s studies and whether its use should be banned.

“We ended up deciding a ban was not reasonable because we have no means of controlling it. This is a very important working life skill, and we must be instructing people on its responsible use. The policy handed responsibility to teachers to ensure they create assignments that are impossible to do with AI alone.”

AI may be used in assignments and theses, but the student must report how it is used and for what purpose. Other universities and units have similar guidelines – encourage AI use but require transparency and responsibility.

Rapid leaps and bounds

AI has been everywhere for quite some time, including regular consumer products such as vehicle cruise control systems and phone cameras.

However, it was the emergence of generative AI – text-generating ChatGPT and Google Gemini (formerly Bard), image-generating Midjourney and DALL-E, and the neural network-based DeepL translation system – that has turned people into active AI users who also understand they are using AI.

AI has progressed by leaps and bounds in tasks that, up until recently, were thought to require human reasoning. Open-source language models are under development even here in Finland. Familiar software suites have also received an injection of AI, one example being Microsoft Copilot.

Software using machine learning plays a major role in analysing collected data, such as processing extensive medical datasets. Hanna-Leena Pesonen reckons that more uses for AI will be found in university administration, for example. Ideally, automating routine administrative tasks would leave more time for human interaction. If AI facilitated more independent lecture studying, student-teacher meetings could focus on discussion and building understanding.

“Hopefully, university administrators would also have more time for meeting people.”

”Hopefully, university administrators would also have more time for meeting people.”

Hanna-Leena Pesonen, Dean, Professor, Jyväskylä University School of Business and Economics

Aalto University Professor of Practice Lauri Järvilehto, who researches thinking and future work, says any wild vision about AI development is likely too conservative and too bold at the same time. 

“The ways of technological progress are unpredictable. Instead of unnecessarily hyping up the future, we could focus on what these gadgets can do at present.”

Aalto has also outlined constraints and encouraged AI use. According to Järvilehto, it would be very counterproductive to fully ban the use of ChatGPT, for example, as there is no way to verify whether it has been used.

“If its use is prohibited, only unethical actors will use technology that has already been proven to increase productivity.”

A language model is not a search engine

The major risk with generative language models is an increase in false information, images, and videos. If the AI’s training data is biased, those biases will be repeated. The major language models focus mainly on English-language material. Language models also hallucinate and can be manipulated. This unreliability raises the question of whether AI has any use in scientific work.

Järvilehto says language models are often misconstrued as search engines despite currently being quite poor at that job. Language models are word-guessing machines that do not produce excellent answers out of the blue. ChatGPT is best suited for tasks in which the user feeds their own material to it. When the user knows the facts, the AI organises and verbalises them – producing, for example, marketing material for a study module or text to serve as a basis for a presentation. It is essential to format the prompt correctly. The language model needs the right context. There is no reason to settle for the first response, as the prompt can be specified further.

When Järvilehto and his team created new educational objectives for their university unit, 20 were generated. That was too many. They took the material to ChatGPT and requested five or six objectives. First, they received consulting jargon. Then they asked it to use clear language and avoid trivialities.

“Within a couple of goes, we had our objectives. The team was utterly enthralled.”  

“Within a couple of goes, we had our objectives. The team was utterly enthralled.”  

Lauri Järvilehto, Professor of Practice, Aalto University

Järvilehto also uses the language model for article summaries. When doing so, he explains what he is doing and requests a summary of the most important finding in that context.

Nothing does critical thinking for you, but AI can complete tasks and act as a sparring partner. The use of proprietary material is restricted by data protection, as confidential information must not be fed to ChatGPT. Järvilehto is aware of experiments in which AI graded essays with evident success, but such uses are blocked by data security and legal protection issues.

Not just language models

University of Helsinki Professor of Computer Science Teemu Roos leads AI Education at the Finnish Center for Artificial Intelligence (FCAI) and serves as one of the leaders of the Generation AI project. This project, funded by the Strategic Research Council, performs research to serve as the basis of AI and security education for children and young people.

Roos says applications for AI can be found all across university tasks – research, teaching, and social interaction alike.

According to Roos, the emergence of language models is already evident in the academic world, and it is not all positive. Some articles have cited non-existent sources. When appraising articles, Roos has occasionally noticed a “sales-oriented” style that sounds like ChatGPT.

“There is concern associated with AI use, perhaps excessively so. However, it is important to consider the ground rules and the ways these applications may be used to assist learning.”

“There is concern associated with AI use, perhaps excessively so. However, it is important to consider the ground rules and the ways these applications may be used to assist learning.”

Teemu Roos, Professor of Computer Science, University of Helsinki

Roos points out that artificial intelligence does not equal ChatGPT. Most AI software is based on something other than language models, such as statistical analytics for processing numerical data. When the Department of Computer Science develops tools for plasma physics modelling, for example, language models are of no use. The major advantage of language models is that words can be used to communicate with AI. They can handle tasks that would previously have required coding. If certain information needs to be picked from data and part of it is more important than the rest, a verbal request can be made instead of having to code weighting factors.

According to Roos, the use of natural language facilitates the use of new systems, skills upgrading, and perhaps even requalification.

“It has been surprising to see how tasks thought to be impossible for AI a few years ago are handled as well as they are. It is hard to say what the development curve is going to look like. This is a bumpy ride.”

Extra eyes and ears for researchers

Jari Laru, a tenured University Lecturer, educational technologist, and teacher educator at the University of Oulu, compares the ongoing shift to the mobile revolution of the 2000s. Currently, he is the Generation AI project’s interaction specialist.

“I haven’t been this excited since I was writing my doctoral thesis back in 2003–2010. That was some proper geek stuff, and now here we go again”, Laru says.

This does not mean he takes an uncritical attitude towards the development. He considers the addictive quality of social media and games – itself based on AI – to be the most harmful thing for children and young people. Artificial intelligence also requires massive computational capacity and data centres, which means draining electricity and natural resources. There are ethical and data protection concerns, but also major opportunities.

Laru finds it pointless to focus on how to prevent cheating in essays.

“Educators must change their pedagogical practices to function in the time they’re working in.”

Jari Laru, Educational Technologist and teacher educator, University of Oulu

“What is the point of making assignments whose answers can easily be generated with AI? Educators must change their pedagogical practices to function in the time they’re working in.”

Laru has attended meetings and held lectures where each participant speaks their own language and is translated by AI to the others’ languages as text. Transcribing material becomes easier, and managing masses of data is improved. AI assists in information searches, library use, and literature analysis.

For instance, the Research Rabbit tool utilises the PubMed or Semantic Scholar search index and shows connections between articles and authors as a visual map, whereas Covidence is a systematic review application that makes use of machine learning to assist in tasks such as citation screening and duplicate checking.

Laru’s colleagues are working on research in which AI analyses video recorded in a learning situation and observes bodily responses recorded by sensors such as heart rate monitors. Data from this analysis helps identify phases and situations during learning where motivation is low or cooperation fails to go smoothly. AI gives researchers extra eyes and ears.

“You should not just reactively devise rules on proper application use. You have to be proactive in considering how academic expertise should be developed. AI is a tool. It is not the master of the house – it is a rake, tractor, or hammer on the cognitive and social level.”

Major political issue

According to Teemu Roos, AI development is also a political question that researchers across different fields should be able to follow. He sees it as a problem that people lack the information needed to influence important development paths.

The European Parliament recently approved the Artificial Intelligence Act, which contains requirements for developing and releasing large language models. People in academia were concerned about the act’s effect on open-source language model development, for example.

According to Roos, even experts in the field are unclear on the act’s eventual effects. In the preparatory stage, even AI experts failed to reach a consensus on what actually constitutes artificial intelligence and what does not. Since it is unknown how AI might develop, it is also unknown what must be addressed through regulation. Some say talking about risks is obstruction and that opportunities should be the focus.

“Those not at the sharp end of the risks may express this opinion. Those with commercial interests may not be as interested in the risks as those who are considering the realisation of human rights, equality, and fairness”, Roos says.

No more optimisation, right?

AI is said to optimise work and improve its efficiency, but few people within universities wish to hear another word about optimisation. It has only increased pressure and expectations on the amount of output. When considering AI use in work tasks, it is pertinent to ask what the end goal is.

“People will either have more time to do things they find meaningful, or we will be expected to produce twice as much whether or not it is meaningful”, says Professor Teemu Roos from the University of Helsinki.

The human brain works at the same rate as it always has. Text and images flashing onscreen faster and faster is not enough for in-depth comprehension. Asking AI for a summary or text draft is very different from concentrating on reading and structuring one’s thoughts through writing.

“There will be more numerically quantifiable results, and publishing speed will increase. When will the person stop to look at the mass of data?” ponders University of Oulu University Lecturer (tenured) Jari Laru.

He would rather talk about increasing creativity than about optimisation. Laru believes there will be cookie-cutter science, art, and literature, but the need for slower thinking will not disappear.

“Modern-day working life is terribly rigorous. If any time is freed up and this becomes public knowledge, the upper level will certainly fill that void. From the employer’s point of view, efficiency gains mean you can publish more or teach more courses.”

What if the time freed up from administrative work were seen in a different way, and blocks for calm thinking could be added to the calendar? Aalto University Professor of Practice Lauri Järvilehto says he already marks these in his calendar but would like more space for them.

“It is crucial that work performed by AI is not replaced with some new pseudo-work such as meetings or fiddling with email, and instead the freed time can be used for something like a walk in the park and staring into the distance.”

Could an increase in calm thinking be followed by a genuine leap in productivity, which both working life and humanity itself long for?
