AI’s impact: From global tech changes to classroom opportunities – www.elizabethton.com


Published 10:22 am Friday, May 17, 2024

The serious threat of misinformation. The promise of sweeping changes to an array of technology. The opportunity for students at every education level to develop and hone new skills – or cheat on assignments.

All are possibilities with artificial intelligence (AI), once a bad joke in science fiction novels but now a reality for millions across the globe.  With technology churning ahead at an incredible pace, East Tennessee State University turned to experts to get a sense of the good, the bad and all that’s in between when it comes to AI.

A new world in marketing
A globally respected scholar, Dr. Stephen Marshall is a former chair of ETSU’s Department of Media and Communication and the current chief marketing officer of the ETSU Research Corporation.

In 2023, he earned a prestigious appointment to the Digital Marketing Institute, which includes representatives from Meta, LinkedIn, Coca-Cola and more.

As such, he’s enjoyed briefings and presentations from world experts on AI. A major takeaway: This work is changing and will continue to profoundly change the world of marketing.

“From planning and brief writing to creative development, campaign management, analysis and optimization, AI is augmenting the digital platforms,” said Marshall. “By 2026, the exponential capabilities of AI should increase model capabilities by 10k times what they are now.”

For Marshall, news headlines about AI may sometimes be overblown. That’s because digital marketers have harnessed the power of AI for years.

Ever received an automated email? Noticed an article in your social media feed that seemed unbelievably tailored to your interests? AI played a role in that.

“From a consumer perspective, AI will continue to improve digital experiences through personalization,” he said. “From an industry perspective, AI will continue to optimize every aspect of our ability to gain attention, consumer action and brand loyalty through more efficient content creation as well as more actionable insights, analytics and marketing automation.”

Plenty of companies have utilized AI, including Starbucks and Sephora, offering customers tailored virtual assistants, as have institutions of higher learning, including ETSU. “As the language models improve,” said Marshall, “so will the customer experiences.”

Academic honesty (and dishonesty) 
AI is a lot like fire, according to Marshall.

“It can heat your house or burn it down,” he said. “You have to know how to use it properly.”

Questions of cheating – what constitutes it, how widespread it is and how educators stop it – have been an important and persistent focus for ETSU.

“In my graduate courses, I have only suspected AI use once or twice,” said Dr. Paul Garton, an assistant professor in the Clemmer College of Education and Human Development. “What AI forces instructors to do is design assignments that demonstrate course mastery, which assignments generally should do anyway. The trick becomes: is it the role of higher education or of employers to distinguish which students used AI appropriately and which abused it?”

Issues for Higher Education 

Garton, who earned a Ph.D. in higher education from Michigan State University, is deeply concerned about fraud. But not always in the sense of student cheating.

He expects phishing schemes – a deceptive tactic to get a user to share sensitive information – and other threats to information security to increase tremendously, both in volume and believability.

He added: “We can expect more online harassment of faculty who conduct research in controversial areas, particularly faculty who are women or people of color. Colleges and universities must be prepared to support and protect faculty who study important topics of the day.”

Information technology (IT) is already one of the big cost drivers of modern higher education – the essential investments in IT security will only increase those costs.
Colleges and universities across the country pour tremendous resources into digital security – for good reason. Ransomware attacks against institutions of higher education are growing.

For Garton, the academic publishing enterprise also appears ill-prepared for AI-generated scholarship.

“Predatory journals already regularly publish low-quality scholarship from faculty who are under tremendous pressure to have a publication,” he said. “The organizational incentives of publish or perish, along with the competitive and zero-sum nature of journals, will incentivize faculty to use AI to pump out writing of dubious quality without proper attribution to AI.”

Training students at ETSU 
Faculty at ETSU include AI in their curriculum. The goal is to help students move from enrolled to employed, a centerpiece of ETSU’s approach to education in a world that will certainly include AI.

Marty Fitzgerald, a professor in the university’s Digital Media program – ranked No. 1 in Tennessee for animation – has taught several courses that prominently feature AI. Right now, he is pilot-testing an AI bot in a graduate consumer behavior course. Students have trained it on ETSU course materials and readings with a goal of offering it as a type of 24/7 office hours.

“You can ask it when assignments are due, for instance, or how to write in APA style,” said Fitzgerald, a successful professional at the respected Academy of Art in San Francisco who also owned a graphics and animation business before coming to ETSU. In a fall 2023 course titled “AI Applications and Understanding,” he and his students did a range of testing to see what AI was capable of, especially in terms of enhancing workflows. Instagram users can take a look at what the students produced.
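The article does not describe how the bot is built, but the idea – matching a student’s question against course materials and returning the relevant snippet – can be sketched in a few lines. The following is a minimal, hypothetical Python illustration using simple word overlap for retrieval; the topics and snippet texts are invented placeholders, and a real assistant would layer a language model on top of a retrieval step like this.

```python
import string

# Hypothetical course-notes store: topic -> snippet of course material.
# All topics and texts here are invented examples, not ETSU's actual data.
COURSE_NOTES = {
    "due dates": "The literature review is due Friday of week 6.",
    "apa style": "Cite sources author-date, e.g. (Smith, 2023).",
    "office hours": "The instructor holds office hours Tuesdays 2-4 p.m.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase, strip punctuation and split into a set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def answer(question: str) -> str:
    """Return the note whose topic and text share the most words
    with the student's question (a crude stand-in for retrieval)."""
    q_words = tokenize(question)
    best_topic = max(
        COURSE_NOTES,
        key=lambda t: len(q_words & (tokenize(t) | tokenize(COURSE_NOTES[t]))),
    )
    return COURSE_NOTES[best_topic]
```

With this sketch, `answer("When is the literature review due?")` returns the due-date snippet because its words overlap that note most heavily.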

Across multiple disciplines, instructors are emphasizing the importance of training students on how to properly harness this technological power – including how to do so ethically.

“In educational settings, generative AI presents both opportunities and challenges. While it can inspire creativity and assist with learning by providing new ideas or explaining complex topics, some students might rely too much on AI for completing assignments without fully understanding the material,” said Dr. Brian T. Bennett, chair of the Department of Computing. “Hence, educators should embrace the technology while emphasizing the importance of responsible use and ensuring it serves as a tool for learning rather than a shortcut to completion.” 

Dr. Alison L. Barton, director of the ETSU Center for Teaching Excellence, has worked with Director of Academic Technology Anthony Kiech on a range of presentations to faculty explaining the pros and cons of AI. 

“ETSU’s Center for Teaching Excellence is constantly learning about and exploring with faculty new and innovative ways to use AI for teaching and learning, while also sharing ways instructors can discourage students from using AI to bypass the learning process,” she said.

Spreading misinformation 
Dr. David Harker, who took over as chair of the Department of Philosophy and Humanities in 2022, has devoted much of his career to studying expertise, skepticism and misinformation.

His 2015 book, “Creating Scientific Controversies: Uncertainty and Bias in Science and Society,” earned rave reviews, highlighting the importance of clear critical thinking and quality decision-making.

He acknowledges the wealth of good that can come from AI. But its ability to spread misinformation is what concerns him most. 

“As we all know, the internet is awash with almost unfathomable quantities of information. Unfortunately, much of it is inaccurate, misleading, sometimes dangerous and often extremely hard to distinguish from the information that should be guiding individual and collective decision-making,” Harker said. “While the challenges are already acute, AI holds the capacity to make matters exponentially worse.”

Programs such as ChatGPT – developed by OpenAI to answer a wide range of questions, usually with text-based content – are perhaps the worst offenders for Harker.

Dr. Ahmad S. Al Doulat, an assistant professor of data science in the Department of Computing, shares similar concerns. “Misinformation is a significant concern in the digital age, and AI amplifies this issue by generating vast amounts of content, sometimes without proper verification or context. ChatGPT and similar AI models have the potential to flood the internet with misleading or inaccurate information, making it challenging for individuals to discern truth from falsehood,” he said. “As educators, it’s crucial to equip students with critical thinking skills to navigate this sea of information effectively. Teaching them to question sources, evaluate credibility and corroborate facts can help mitigate the spread of misinformation facilitated by AI.”

And AI can be convincing – even when the information is false or damaging.

“Regardless of a model’s fine-tuning, training AI with data from the internet presents a set of challenges. One major issue is bias – since the internet contains all sorts of information, including inaccurate or unfair content, the AI can inadvertently learn these biases,” said Bennett. “Moreover, these models can become overly confident in their incorrect answers or create completely new, incorrect information, a problem known as ‘hallucination.’ Developers must continuously work to improve and correct large language models to ensure they are as accurate and unbiased as possible.”

 
