
Adjusting our attitudes about new generative AI tools is critical to successfully integrating them into both business and educational practices. 

Most organizations have taken one of three stances on generative AI: banning it, putting it “under review,” or allowing a period of experimental integration. Outright bans and ill-defined reviews point to leadership that is either scared or in denial. Early adopters of experimental integration have much to gain from learning how generative AI can improve their processes and practices. 

We’re not ashamed to admit it: when generative AI first made its way to mass audiences with the release of ChatGPT in November 2022, we were a little freaked out. Predictive AI was nice. It could tell us what to watch next on Netflix or make suggestions for products we should add to our Amazon carts. It was helpful but not eerily so. 

But AI built on large language models that visibly and obviously learned new things right before our eyes? This was borderline Terminator territory. 

Here’s an example: while earlier AI tools failed the SAT, the LSAT, and the GRE Quantitative section, ChatGPT passed them. And GPT-4 then significantly improved on its predecessor’s scores! The tech is cool, but the vibe can be a bit uncanny.

Despite our knee-jerk reaction to tools that can learn at sometimes frightening speeds, we’ve adjusted our attitude the more we’ve experimented. There is still much we don’t know about where generative AI will take us. Early adopters and willing experimenters are the ones who will have a say in how this new technology integrates into our lives.

The Trouble with Bans

Bans shut down discussion of the proper use of generative AI in different organizations. When bans are in place, there is no way to experiment, surface ethical concerns, or become familiar with the limitations of these tools. Conversations about academic integrity never happen, and concerns about sharing proprietary data are never aired.

Organizations that outright ban the use of generative AI have valid concerns. Schools worry about plagiarism and cheating. Free speech proponents and journalists fret about disinformation being presented as truth. And everyone worries that workers and students will suffer from weakened critical thinking skills. 

But bans themselves are more likely to cause harm. As students and workers with access to generative AI learn to use it as a valid tool, the AI have-nots will suffer from widening equity and performance gaps. Students and workers who are never exposed to AI miss out on the opportunity to learn vital new skills that will keep them marketable throughout their careers. They also lose their voice in shaping how these new tools are used. 

The Right Way to Review

Organizations that leave generative AI tools in a perpetual “review” phase are like parents who say “I’ll think about it” every time their teenager asks to hang out with friends. The request is pretty reasonable, but the lack of decisiveness shows weak leadership from the top. Just as the teen will worry whether their parents really trust them, the employees or students at your organization will wonder if they have their leaders’ faith. 

What does all this perceived distrust lead to? Sneaking around! Your teen’s desire to be social is natural and irrepressible. Your employees’ desire to work with more efficient tools that make their work both easier and better is… maybe not quite as irrepressible, but still strong! Just as total uncertainty around their social lives will lead a teen to sneak out or break the rules, your employees will sneak around with AI if their official requests never leave the “review” phase. 

If your organization insists on a review phase, make a specific plan for your experiments and tool assessments. In your caution, remember that AI can be controlled. Decide which tools you want to assess. Outline which sources you are comfortable feeding into the LLM and which sources you are comfortable letting it draw from. 

One great experiment is to take a lesson you’ve taught or a problem you’ve encountered frequently and give it to a generative AI tool for refinement. Your familiarity with the material will tell you whether the tool is going off track, addressing issues the same way you would, or offering something innovative. 

Keep your assessment period brief and be very clear about the rules around what is and is not allowed at your organization. Further periods of assessment might be necessary, so be prepared to continue the process if your organization requires tight controls. 

Early Integration, Early Success 

Early adopters of any new technology often stand to gain the most. Working through the period of friction that comes with new technology stimulates problem solving and improves overall collaboration across teams. We all want to see our employees saying “look what I did!” with a new piece of tech and cross-training their coworkers to get them up to speed. 

If you’re willing to cut your organization loose with generative AI without an initial period of banning or review, you still need to acknowledge a few points as you’re getting started. Remind your employees of the guiding principles of your organization and the goals you’re trying to achieve as a school or business. Any uses of generative AI need to fit within those principles and goals. If there are certain tools you know you don’t want to use, be clear about those. And if there is certain data you know you don’t want uploaded to generative AI tools, be clear about that as well. Sales data or proprietary code may fall in this no-go zone. 

Providing training on AI ethics is also worth considering. Large language models are built on data that already exists, and aggregated data can lead to biased, discriminatory output. Generative AI also raises concerns about plagiarism and fake news, and its output can violate copyright law and infringe on intellectual property rights. All employees need to be aware of these risks. 

Be sure to set up a system for capturing the successes that your teams have as they experiment with generative AI. Developing a “cookbook” of high-quality prompts for your specific needs should be one of the end goals of your early explorations. Cross-training may happen naturally, but it’s a good idea to build a somewhat formal process for idea-sharing, too. 

Embrace the New

Now that we’re over our initial fears about generative AI becoming actual Skynet, we’re all in on experimentation. As a cybersecurity firm, we know that bad actors are using AI to identify gaps in our customers’ security perimeters. Unskilled hackers are finding success in their attacks because they can now buy AI tools to do the hard work. And even nation-state threat actors are using AI to infiltrate their enemies’ systems and spread disinformation.

We have no choice but to learn, test, and understand what generative AI has to offer in the field of cybersecurity. We challenge all of our customers and colleagues to keep a similarly open mind as we enter a world where AI is yet another tool for the improvement of our work and learning. 

Want to work with us? Reach out to Asylas at 615-622-4591 or email info@asylas.com. Or complete our contact form.