Various forms of artificial intelligence have been around for many years, going as far back as Google’s algorithms for page ranking. However, in the past few years, the introduction of large language models (LLMs), machine learning models built to take in and produce natural language, has brought AI into the mainstream. LLMs present an interface that only requires users to be capable of reading and writing text, in contrast to other forms of AI that take narrower forms of input and output and conceal their involvement. Thanks to OpenAI’s free and accessible ChatGPT line, anyone with a device and an internet connection can consult a jack-of-all-trades, rapid-response “ask me anything” tool that yields decent results for many queries.
As a result, technical fields like software engineering have employed AI to facilitate productivity: IDEs use some form of AI for coding assistance and autocomplete, ChatGPT is often asked to “write code that does [some feature/task] for me,” and so forth. Naturally, AI has also made its way into classrooms. Sometimes it manifests only in students who use it for learning or assignments; other times, instructors integrate it into their courses, either with remarks on it or with assignments dedicated to it (like this one, for example).
AI is not perfect, though, and more technical and niche topics and requests tend to come with more inaccurate results. Additionally, frequent reliance on AI may leave individuals with poor fundamentals, which can affect their performance and skill with software engineering tasks down the line. Below, I document my personal experience with AI in my ICS-314 class, briefly explaining the different aspects of the class alongside my personal AI usage (or lack thereof).
Roughly speaking, ICS-314 has the following top-level applications of AI: coding in general, the WODs, writing technical essays, the final project, learning concepts and tutorials, and interacting with others.
Note that coding overlaps with both the WODs and the final project, but the latter two represent higher-level applications of code in different contexts.
As stated above, WODs (workouts of the day) are timed programming assignments. There are “experience” WODs, which function as repeatable homework assignments; in-class practice WODs, which aren’t graded; and in-class WODs, which are graded.
I will say, though, that when things aren’t making sense and there is nowhere else to turn, AI-generated code could very well be better than none at all. That only applies when I am completely lost, however, and I’ve been fortunate not to have been lost to that degree so far.
We’re sometimes tasked with writing technical essays (like this one) for our online portfolio on certain topics, the most recent being design patterns. In those cases, I’ll sometimes use AI to help me brainstorm topics, but I’m hesitant to use its words directly, as I feel they might not reflect my own personal tone.
As we relied heavily on the Next.js framework, which made use of concepts that felt foreign to my group members and me, we would occasionally use AI in our final project to generate code samples or explain topics like many-to-many relations in our database. (One such discussion in one of our project issues can be found here.)
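To give a sense of the kind of question we brought to AI, here is a minimal sketch of working with a many-to-many relation through Prisma, a common ORM pairing for Next.js; the Post/Tag models and field names below are hypothetical stand-ins, not our project’s actual schema.

```typescript
// Hypothetical sketch of an implicit many-to-many relation in Prisma.
// Assumed schema (not our project's actual models):
//   model Post { id Int @id @default(autoincrement())  title String  tags Tag[] }
//   model Tag  { id Int @id @default(autoincrement())  name  String  posts Post[] }
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function demo() {
  // Creating a post and connecting it to existing tags populates
  // the hidden join table that backs the many-to-many relation.
  await prisma.post.create({
    data: {
      title: "Hello, world",
      tags: { connect: [{ id: 1 }, { id: 2 }] },
    },
  });

  // Filtering across the relation: every post that carries tag 1.
  const tagged = await prisma.post.findMany({
    where: { tags: { some: { id: 1 } } },
    include: { tags: true },
  });
  console.log(tagged);
}

demo().finally(() => prisma.$disconnect());
```

The implicit join table is Prisma’s default for this pattern; an explicit join model works too when the relation needs extra fields.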
As mentioned regarding our final project, I’ve used AI for concepts that were very foreign to me, but that hasn’t happened outside the final project, since reading online content has been sufficient for me to understand what’s happening in most cases. Overall, I wouldn’t use AI to learn a concept or follow a tutorial.
The course also involves interacting with others, in particular by asking and answering questions. Aside from the points mentioned earlier, I haven’t used AI for that or for anything else.
Due to my selective usage of AI, I feel that it has had a small but positive effect on my learning and development. LLMs like ChatGPT in particular serve as a well-rounded, fast, and mostly intelligent “buddy” I can go to when I have no idea where to start.
AIs in general already have applications in predictive and processing models like page ranking, image generation, and weather prediction. In the case of LLMs in particular, people have used them for all forms of help and assistance in and out of the classroom, and companies have built chatbots out of them. AIs that specialize in specific areas tend to perform well with a disproportionately low error rate, but generalized LLMs run the risk of providing incorrect or harmful information, such as this incident in which a chatbot suggested a child harm their parents. As such, AI shouldn’t be applied to a problem just for the sake of it.
Within the course, some people have an overreliance on AI that may hamper their learning. There’s also a caveat for instructors: students may employ AI without disclosing their usage. Used that way, AI is simply a different, more convenient form of plagiarism, and openly embracing it reduces this area of potential academic dishonesty while potentially moving students toward non-plagiarizing forms of aid. By treating AI as a tool to be used in select cases and with caveats, there is an opportunity for it to be a net positive, reducing frustration and improving development speed for individuals who benefit from it.
AI-enhanced approaches add convenience and reduce frustration, as they offer a path to a most-likely-functioning result very quickly. However, this comes with a downside: software engineers may end up having AI do most of their work, which can impair skill development. How much depends on the individual engineer, and can be weighed in terms of engagement, knowledge retention, and practical skill development, for example.
In the above cases, AI brings both potential benefits and drawbacks to the classroom. When applied well and with appropriate warnings against overuse, AI can yield a significant increase in output and satisfaction. When it isn’t, it can severely reduce students’ ability to learn, which in turn produces poorer software engineers.
At the time of writing, GitHub Copilot provides free access to students. My personal concern is that this encourages the issue raised above: students who use AI freely and carelessly would come out worse software engineers than they otherwise would. The main challenge with introducing AI is training individuals to avoid overusing it to the point that their skills atrophy. If executed well, however, students may find themselves more satisfied with their schoolwork and perform well in their classes. This may lead more people to pursue software engineering, which will in turn produce more and better software engineers. There aren’t many areas of improvement in ICS-314, since the instructors have taken care to integrate AI in a way that warns students against overuse; other classes may do well to integrate AI similarly.
Overall, I find that AI was appropriately integrated into ICS-314. There will always be students who overuse it to the point that their skills atrophy, but the majority who use it appropriately may find themselves in good standing at a fraction of the effort and struggle they would have faced without access to AI. I personally didn’t make extensive use of it, however.
Since each assignment came with a required disclaimer on AI use, I recommend that the instructors collect data on WOD grades, final project grades, and AI usage frequency to see whether the first two correlate with the third. If AI usage correlates positively with WOD grades but negatively with the final project grade, that would suggest the WOD assignments are too simplistic and may lull students who use AI excessively into a false sense of security. Such analysis could further improve ICS-314’s already-excellent AI integration.
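As a rough sketch of what that analysis might look like (the record shape, field names, and the choice of a plain Pearson correlation are all my own assumptions, not anything the course collects today):

```typescript
// Hypothetical sketch: Pearson correlation between self-reported AI usage
// and grades. The record shape and fields are assumptions for illustration.
interface StudentRecord {
  aiUsesPerWeek: number; // from the required AI-use disclosures
  wodGrade: number;      // average in-class WOD grade, 0-100
  projectGrade: number;  // final project grade, 0-100
}

// Pearson correlation coefficient of two equal-length samples.
function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / n;
  const mx = mean(xs);
  const my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx;
    const dy = ys[i] - my;
    cov += dx * dy;
    vx += dx * dx;
    vy += dy * dy;
  }
  return cov / Math.sqrt(vx * vy);
}

// If r(ai, wod) is clearly positive while r(ai, project) is negative,
// that hints the WODs are too forgiving of heavy AI use.
function analyze(records: StudentRecord[]): void {
  const ai = records.map((r) => r.aiUsesPerWeek);
  console.log("AI vs WOD grade:", pearson(ai, records.map((r) => r.wodGrade)));
  console.log("AI vs project grade:", pearson(ai, records.map((r) => r.projectGrade)));
}
```

A proper study would also account for sample size and significance, but even this crude comparison would show whether the two correlations point in opposite directions.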