In just six prompt engineering iterations, we achieved real business value with generative Artificial Intelligence. We are now able to:
- Significantly reduce the time required to review and assess client submissions using our WHAT-WHO-WHY rubric.
- Provide a comprehensive evaluation and quality suggestions for each entry.
- Make our assessment and recommendation logic explicit and accessible to those we work with, where previously it had been left implicit.
- Increase our capacity to provide service.
- Quickly and easily incorporate new data and insights into our process.
While we found it doable, prompt engineering required careful attention to detail and follow-through to identify and address what worked and what did not at each step.
Now we have a fantastic starting point from which we can easily apply our facilitation skills to deliver even higher levels of value to leadership teams.
Chatbots built on Large Language Models, such as Bing Chat and Bard, have received a lot of attention for their ability to chat intelligently (or so it seems!) about almost anything. However, there are not yet many examples of chatbots providing real business value.
At IntelliVen, we wanted to test generative AI’s ability to improve the quality of our work while simultaneously lowering costs. We are pleased to have found that chatbots can generate significant business value.
In this post, our goal is to share how we achieved business value using generative AI so that readers can build on our efforts and push our thinking (and their own thinking!) even further in this rapidly evolving discipline.
The key to what we achieved is Prompt Engineering, an iterative process by which we created a suitable and reusable prompt for our use case.
Prompt Engineering Definition
Prompt engineering is the process of designing and refining prompts to improve the performance of AI models. It involves techniques like using the right words, format, length, and parameters to coax the best performance from an AI model given its training data.
Prompt engineering can also include providing pertinent background, such as:
- The profile of the person inputting content.
- What perspective to take when assessing input.
- The profile of persons who will read the output.
- What types of output are requested.
- What indicates high quality output.
Especially when you plan to interact with a chatbot for the same purpose on a recurring basis, it is critical to invest in prompt engineering to ensure you get the highest value results.
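To make this concrete, here is a minimal sketch of how the background elements listed above might be assembled into a reusable prompt template. All of the wording, the rubric text, and the role descriptions here are illustrative assumptions for the sake of example, not IntelliVen's actual prompt.

```python
# Illustrative sketch: assembling the background elements above
# (author profile, assessment perspective, audience, requested output,
# quality bar) into one reusable prompt template. All wording is
# hypothetical, not IntelliVen's actual prompt.

PROMPT_TEMPLATE = """\
You are an experienced management consultant reviewing a WHAT-WHO-WHY
submission from a member of a client's leadership team.

Background:
- Author profile: {author_profile}
- Perspective to take: that of a skeptical prospective buyer.
- Audience for your output: the consultant facilitating the session.

Task: assess the submission below against the rubric and return at
least three specific, constructive suggestions for improvement.

Rubric:
{rubric}

Submission:
{submission}
"""

def build_prompt(author_profile: str, rubric: str, submission: str) -> str:
    """Fill the reusable template in for one submission."""
    return PROMPT_TEMPLATE.format(
        author_profile=author_profile,
        rubric=rubric,
        submission=submission,
    )
```

Because the template is fixed and only the submission varies, the same engineered prompt can be reused for every entry, which is exactly what makes the up-front iteration worthwhile.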
At IntelliVen, we help leaders and their teams architect, build, govern, and change their organizations. The cornerstone of our approach is to first align leaders and their teams on the definition of their business; that is:
- WHAT the organization provides.
- WHO buys what they provide.
- WHY buyers choose to purchase from the company.
PREPARATION: We first ask the leader and their top team to each independently fill out the W-W-W template with their perspective on the three dimensions that define their business. We assess individual responses for clarity and specificity and then compare responses to identify what is common and what is different between them.
For every submission we:
- Compare each response against a rubric that outlines what each element of the W-W-W should and should not contain to come up with at least three helpful pieces of feedback.
- Analyze the frequency of terms used to identify differences and similarities across submissions.
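The term-frequency comparison in the second step can be sketched in a few lines of Python. This is a simplified illustration of the idea, counting terms per submission and flagging which are shared and which are unique; the submission texts and stop-word list are hypothetical.

```python
# Illustrative sketch of the term-frequency comparison described above:
# count the meaningful terms in each submission, then report which terms
# all submissions share and which are unique to each author.
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "for", "we"}

def term_counts(text: str) -> Counter:
    """Lowercase, strip punctuation, and count non-stop-word terms."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    return Counter(w for w in words if w and w not in STOP_WORDS)

def compare(submissions: dict[str, str]):
    """Return (terms common to all, {author: terms unique to them})."""
    sets = {name: set(term_counts(text))
            for name, text in submissions.items()}
    common = set.intersection(*sets.values())
    unique = {}
    for name, s in sets.items():
        others = set().union(*(o for n, o in sets.items() if n != name))
        unique[name] = s - others
    return common, unique
```

Common terms point to language the team already agrees on; unique terms surface the differences worth discussing in the facilitated session.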
FACILITATION: Next we facilitate a session (or sessions) in which the leader collaborates with their team to align on what should and should not be included in a clear and concise definition of their business which then serves as input to virtually every aspect of their business.
It takes a consultant with significant expertise (of which there are but a few) about half an hour per entry to internalize the submitted input, apply their best thinking, and document their assessment. Depending on the size of the group, this process may take up to half a day per organization.
Our goal was to see how much AI could help us with preparation, which includes intake, assessment, recommendations, and documentation for each entry. The following table summarizes six iterations of prompt engineering towards this end:
We found that generative AI effectively added value to all four preparation steps (intake, assessment, recommendations, and documentation), generated better and more comprehensive results, and did so almost instantly.
Specifically, generative AI:
- Makes explicit, for both our consultants and our clients, the logic we previously applied intuitively. In other words, the chatbot explains why it made the assessment and suggestions it did.
- Saves time and cost by dramatically reducing the effort required to assess input and document results.
- Produces a high-quality guide our consultants can use to facilitate the collaborative process with leadership teams to reach alignment on a consolidated version.
- Comprehensively applies all facets of our assessment rubric to every submission with no additional cost or effort. Previously we were content to stop after identifying just one or two salient points!
- Enables us to present assessments and suggestions with more friendly and readily received language than we tend to draft on our own.
- Draws on a comprehensive set of up-to-date world knowledge in its suggestions when we use a bot that is connected to the internet and pointed to content on our site.
- Easily updates response logic with new case data or when we update the assessment rubric.
Note that, even though it has great value, we do not plan to sell our prompt or to charge for its use. Rather we have integrated it into the W-W-W method to:
- Enhance the quality of our assessment, suggestions, and facilitation.
- Increase the number of people able to do what our best do.
- Dramatically lower the time and cost to do what we do.
- Increase the attractiveness of our offering to prospects.
While AI provides great value, it is important to:
- Know when to stop. After a prompt has been run more than a couple of times on the same input, the chatbot's responses tend to go in circles that add little to no additional value.
- Check everything carefully. Generative AI often makes up content, which we have learned to interpret as a signal that "something along these lines is needed here." For example, it will invent a role in the organization for the buyer when the submitted entry does not identify one.
- Never take the chatbot’s output as a final answer. Use it simply as quality input to the collaborative process.
- Note that different chatbots give wildly different responses to the same prompt and the same bot is apt to give a different response to a resubmission. While initially concerning, we learned to consider each response as one more round of input for collaborators to consider.
- Recognize that chatbot responses offer little in the way of creativity or imagination, except when the bot makes things up instead of pointing out gaps. Responses are simply a thorough application of facts and rules. Creativity and imagination come from the leader and team … not the machine!
We will soon share the results of using our AI-powered WHAT-WHO-WHY assessment and recommendation engine on a real case study.
To try out our engine for yourself:
Fill out and submit a WHAT-WHO-WHY Template to run your case through our process!
- LinkedIn Learning: How to Talk to the AIs with Xavier Amatriain, VP of Engineering, AI Product Strategy at LinkedIn
- Possible Podcast Prompt and Process with Ethan Mollick [AI miniseries]
- The Power of Clarity eBook; a primer on the history and use of the IntelliVen WHAT-WHO-WHY Tool and Method