The Role of GenAI in Higher Education in the Applied Disciplines

Written with Christine Loveridge, as part of BlueDot’s AI Governance Course

July 2024

Professional practice in many disciplines is being transformed by AI, and higher education needs to keep pace. Many universities, to their credit, have recently adopted guidelines about AI usage in education, but they often fall short. Some myopically ban student usage of AI on assessments, while others, despite embracing AI in theory, fail to offer concrete guidance on how to prepare students for an AI-enabled world. We suspect that this is because maintaining the pre-AI status quo is administratively straightforward, whereas the path towards integrating AI into higher education is still murky. This is a dangerous state of affairs because it leaves students ill-prepared for a rapidly changing job market, and vulnerable to technological unemployment. Universities, too, put themselves at risk of falling enrollment and ratings, should students abandon them in favor of forward-thinking alternatives. Researchers need to act now to determine how to update higher education in the applied disciplines so that it prepares students for AI-enabled professional practice. 

Authentic Assessments and AI Usage

In addition to imparting the broad benefits of a liberal arts education, a central goal of higher education in the applied fields is to equip students for professional practice. Instructors achieve this goal by designing “authentic” assessments—assignments and tests that closely simulate tasks encountered in real professional settings. As AI becomes more prevalent in workplaces, involving AI in assessments enhances their authenticity. 

Despite widespread agreement that authentic assessment is important, most academic debates and guidelines about AI focus on preventing its use on tests, citing fears about cheating and the difficulty of evaluating what students have learned. When students use AI to complete assessments, it is often impossible to tell if the test results reflect the students’ intrinsic abilities. This is concerning because an AI capable of passing authentic tests seems close to being able to displace workers from their jobs. If students are vulnerable to technological unemployment, then that calls into question the value proposition of higher education in the applied disciplines. 

On the bright side, we feel that universities can mitigate the risks that AI poses by adjusting their approach. Standalone AI cannot yet replicate human-level performance across the board, which suggests that some of the performance parity between standalone AI and AI-assisted students is an artifact of the outdated way that students are taught and tested. By updating their curricula and assessment methods to emphasize areas where humans outperform standalone AI, universities can maintain relevance in the broader educational landscape and prepare students for AI-enabled professional practice. 

How do curricula need to change? 

The first task in updating curricula is to emphasize the areas where humans outperform standalone AI in order to ensure that students are learning skills that complement rather than compete with AI. 

To accomplish this, educators will need to draw from empirical studies that compare human and AI capabilities across various fields. Although this work has begun, our current understanding is incomplete, and further research is needed to more fully characterize the relative strengths of humans and AI. Ultimately, this ongoing evaluation should allow higher education institutions to adapt their curricula dynamically as some “traditionally” human skills, such as negotiation and expressing empathy, may become replicable as AI progresses. It is worth noting that current evidence suggests that humans still outperform AI in tasks that require genuine creativity, including the ability to reformulate problems, approach them from novel angles, and come up with original solutions. While AI excels at recombining existing information, it often struggles to glean new insights. This suggests that it is important to explicitly teach creative and divergent thinking. 

The second task is to de-emphasize instructional content that has lost value for students. There will be cases where the pre-AI way of doing a task gets superseded by newer methods, and ought to be de-emphasized or discarded from the curriculum. For example, mathematics students still need to understand logarithms conceptually, but we no longer teach them the now-obsolete method of estimating logarithms with a slide rule. As educators re-evaluate curricula, they must identify the skills that are better presented to students as historical footnotes.

The third task is to teach students how to integrate AI into their work. Some leading institutions have begun this process. The Harvard Chan School of Public Health, UC-Berkeley School of Law, and the Parsons School of Design, for example, now offer courses or certificate programs on AI applications in their fields [1, 2, 3]. A few institutions have gone further, integrating AI into core coursework rather than teaching it as a separate, bolt-on topic. An example of this “AI-first approach” is an introductory computer science class at the University of California, San Diego, that teaches students how to program with GitHub Copilot and ChatGPT. However, such fully-integrated courses remain uncommon, especially outside of software-related fields. Although AI does not need to be part of every lesson, its use should be taught alongside the professional skills for which it is relevant. The more holistic this integration, the better. 

Finally, future professionals may need to become more interdisciplinary as AI becomes capable of doing more and more of the work for which a single specialist might historically have been responsible. For instance, if AI can handle 75% of the work in copywriting, design, and programming, then a web development professional might need to cover the remaining 25% in all three of those areas. To that end, educators should perhaps broaden the range of disciplines that they cover so that individual students can learn how to leverage AI to do the work that previously would have required a multi-disciplinary team.

Where would the curriculum remain the same?

Even as AI reshapes many elements of the applied disciplines, some aspects of education will remain constant. First, regardless of AI’s capability level, students still need sufficient subject-matter expertise to develop robust, field-specific mental models that allow them to reason deeply within their field, formulate hypotheses, ask questions, and evaluate arguments and evidence. By analogy, expert software engineers today still find some knowledge of lower-level languages like C important to their mental models of computation, even if they program almost exclusively in higher-level languages that merely use C under the hood. A solid foundational understanding of their field is also necessary so that humans can spot where AI systems fall short, such as when they hallucinate or draw flawed conclusions.

Moreover, a solid foundation in one’s field is important for understanding relationships across different bodies of knowledge and identifying patterns across contexts. This kind of big-picture thinking is what enables humans to make unexpected connections and drive innovation. The ability to think broadly, question assumptions, and challenge the status quo—particularly when it is inefficient, repressive, or destructive—remains a uniquely human trait that education should foster. This is essential to advancing independent thought and countering groupthink.

Finally, communication skills will remain indispensable regardless of how far AI advances. When working in real time with others, students still need to reason on their feet and articulate arguments without AI assistance. And even when students are using AI, they will need to be able to write a specification of what they want the AI to do in order to use it effectively, and AI cannot do that for them.

What are the implications for assessment?

The goal should be to design assessments that cover course content, including the use of AI, yet still require students to demonstrate learning through responses that AI cannot auto-generate with minimal guidance. Instructors could check how well a draft exam meets this goal by feeding it to standalone AI and seeing how well the AI scores. The disadvantage of this approach is that it amounts to ad hoc trial and error. More research is needed to arrive at general principles that educators can use to make AI-resilient tests.
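This trial-and-error check can be sketched as a simple script. In the sketch below, `ask_model` is a hypothetical placeholder for a call to a real AI system (e.g. through a provider’s API); here it is stubbed with canned answers so the workflow itself is runnable. The names `resilience_report` and the sample questions are our own illustrations, not part of any existing tool.

```python
def ask_model(question: str) -> str:
    """Placeholder for a call to a standalone AI system.

    Stubbed with canned answers for illustration; a real version
    would send the question to an AI model and return its reply.
    """
    canned = {
        "Define logarithm.": "the inverse of exponentiation",
    }
    return canned.get(question, "I am not sure.")


def resilience_report(exam: dict[str, str]) -> dict[str, bool]:
    """Map each question to True if the AI's answer contained the
    answer key, i.e. the question likely needs redesign."""
    report = {}
    for question, answer_key in exam.items():
        ai_answer = ask_model(question)
        report[question] = answer_key.lower() in ai_answer.lower()
    return report


# A draft exam: question -> expected answer key.
exam = {
    "Define logarithm.": "inverse of exponentiation",
    "Critique this dataset's sampling bias.": "depends on context",
}

report = resilience_report(exam)
# Questions the AI answered correctly are flagged for redesign.
flagged = [q for q, ai_solved in report.items() if ai_solved]
```

In this toy run, the factual-recall question is flagged because the stubbed AI answers it, while the open-ended critique question is not, mirroring the intuition that recall questions are the least AI-resilient.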

For the near-term, there are at least a few techniques for making tests AI-resilient that warrant further exploration. For example, one technique could be to give assignments that would require students to devise elaborate prompts in order to fully answer questions. Instructors could accomplish this by asking open-ended, multi-layered questions which do not explicitly specify what information would be needed to provide a complete answer. A second technique could be to provide complex factsets so that the student needs to discern what information is most important to the question. For example, a business instructor could provide a large dataset of customers’ interactions with a hypothetical company and leave it up to students to determine which metrics or measures are most important, and what action the company should take based on their analysis. A third technique could be to require students to give a live oral defense of whatever they used AI to produce. This exercise would require students to think on their feet and fully understand the reasoning behind their answers.

What is needed to make progress on these questions?

To move forward on this issue, several key elements are needed. Higher education institutions should consider hiring dedicated “AI Integration” staff to help instructors understand AI’s impact on their field and how to revise curricula and assessments. For well-endowed universities, this could be a specialist in both AI and a particular applied discipline. For instance, an advisor on AI in a biomedical engineering department would be different from someone advising on AI in a school of architecture. Universities with limited budgets could instead establish an AI office that provides general support across departments. The specifics of this arrangement could take many forms, but it seems like a large enough endeavor to warrant new full-time jobs.

Institutions can incentivize instructors to make these changes through policy. For example, they could require continuing education in AI or provide guidance for the inclusion of AI in the curricula of applied disciplines. Institutions might also consider offering free access to state-of-the-art AI systems for all instructors and students or explore other models for reducing barriers to such AI system access, such as supporting deployments of open-source models at their institution. 

Additionally, funding for research is required to address open questions about human skills relative to AI now and in the future, assessment design in an AI-enabled context, and evidence-based best practices for curriculum redesign. Given the broad public good that this research would represent and the incentives to get this question right, universities should aggressively pursue several avenues to secure funds for this research. More work is needed to tap into resources from a variety of public and private sources, and to expand the scope of funding available through advocacy efforts or industry partnerships. 

Conclusion

Universities must adapt to remain relevant as AI changes the way that we learn and work. Educational institutions have successfully revised their teaching practices in the past to integrate disruptive new technologies such as the calculator and the search engine, and now it is time to do the same for generative AI. However, we do not believe this is simply a matter of institutional leadership handing down a policy. This is a new category of research that needs investigation in order to establish best practices. Above, we have outlined some of the key research questions that we feel warrant more attention from domain experts. This list of questions is current as of Summer 2024, but it will need revision soon if AI keeps advancing at its current pace.
