On Tuesday, we had an amazing webinar with Lovework about their branding process. Our team used Otter.ai to transcribe the recording and the OpenAI Playground to extract insights.
Here is the raw text from Otter.ai, with the insights OpenAI found highlighted in red. We pasted our text into the prompt field and wrote: “summarise”.
If you want more interesting results, make your prompt more specific. Since our audience is quite senior, we asked the AI to summarise our text for senior designers. Try generating several times until you get exactly what you want. Play with the Temperature setting: lower values give safer, more predictable results, while higher values give wilder ones. Explore the built-in templates for inspiration.
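If you would rather script this than click around the Playground, the same call can be made through OpenAI's API. Here is a minimal sketch, assuming the pre-1.0 openai Python library and the text-davinci-002 model; the prompt wording and the transcript file name are our own illustrative choices:

```python
import os
import openai

# Assumes an API key is set in the environment.
openai.api_key = os.getenv("OPENAI_API_KEY")

# Hypothetical file name: the raw Otter.ai export.
transcript = open("webinar_transcript.txt").read()

response = openai.Completion.create(
    model="text-davinci-002",
    # A more specific prompt gives more interesting results.
    prompt=f"Summarise the following webinar transcript for senior designers:\n\n{transcript}",
    temperature=0.7,   # lower = safer and more predictable, higher = wilder
    max_tokens=300,
)
print(response.choices[0].text.strip())
```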
You can also use upword.ai to summarise the text and find insights easily.
For the post about empathy, we asked OpenAI to generate 5 questions for leaders on how to develop the right level of empathy. Since we already had questions in mind, writing a post with AI took us around 3 minutes.
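The same Completion call handles this kind of prompt; a quick sketch under the same assumptions as above (the exact prompt wording is ours):

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Generate 5 questions for leaders on how to develop "
           "the right level of empathy.",
    temperature=0.7,
    max_tokens=250,
)
# The model usually returns a numbered list, one question per line.
for line in response.choices[0].text.strip().splitlines():
    print(line)
```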
OpenAI's language model seems good at generating lists, bullet points and suggesting resources, so use it for those tasks. Keep in mind that it was trained on sources from before 2020, so you won't find much fresh material in it.
For some of the posts, we used copy.ai to tweak the copy and make it catchier. Copy.ai suggests several variants of the same text, and you can choose the tone you want.
For each post, we decided to generate the images entirely with AI rather than polishing them in Photoshop or Figma. Ugh, this was hard for two reasons:
It was hard to decide what to render. DALL-E 2 can't do much with concepts like empathy, removing the ego, or leadership; they are too abstract. So for each abstract idea, we had to come up with a concrete metaphor such as a heart, hands or puzzle pieces.
Each tool produced different results. For example, these are the results for “yellow matter ball in the center on the sand and yellow liquid waves going around top view Vray render”. Our team preferred DALL-E 2, but we are keen to explore other tools further.
Use the template: Object + Background + Style (e.g. “yellow ball” + “on the sand, top view” + “Vray render”, as in the prompt above).
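If you want to generate images programmatically, the same template can be fed to DALL-E through OpenAI's image endpoint. A minimal sketch, assuming the pre-1.0 openai Python library; the three prompt parts are illustrative, adapted from the example above:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Object + Background + Style, filled in with example values.
obj, background, style = "yellow ball", "on the sand, top view", "Vray render"

response = openai.Image.create(
    prompt=f"{obj}, {background}, {style}",
    n=4,                 # generate a few options to choose from
    size="1024x1024",
)
for item in response["data"]:
    print(item["url"])   # temporary URLs to the generated images
```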
It was pretty hard for us to decide on a pumpkin style and to get the colours right. After tons of feedback from our creative director and various rendering attempts, we came up with a style. We had to try different lighting, rendering software, background colours, and more.
Drawback: unfortunately, we didn't figure out how to get exact hex or Pantone colours, so this was a bit limiting.
Create a mood board with the images that you like and come up with keywords. In Midjourney, you can explore what other people generate and learn different keywords. For example, the keyword 'vray' gives you images that look like 3D renders, while 'fuji 35mm' gives you images that look cinematic.
Once we generated something we liked, we used it as a reference for future work so we could keep the same style. In DALL-E 2, you can use the outpainting feature for this.
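Outpainting is a feature of the DALL-E 2 web editor; the closest API equivalent we know of is the image edits endpoint, which takes an image plus a mask with a transparent region and fills that region in the style of the original. A hedged sketch, again assuming the pre-1.0 openai Python library; the file names and prompt are hypothetical placeholders:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# "reference.png" is a square RGBA PNG of the image we liked;
# "mask.png" marks the area to fill with transparent pixels.
response = openai.Image.create_edit(
    image=open("reference.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="extend the scene in the same style",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])
```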
Use AI as an extra pair of hands or a tool. Even though we loved playing around with various AI tools, and they made creating social media content up to 10 times faster, they didn't replace our creative team, who still had to set the direction and filter the outputs. We used Photoshop and Figma to finalise our designs, add typography and tune the colours. There is still a lot of potential in AI to explore, and we will definitely be using it in the future.