
Search behavior: of brains and biases

  • Writer: Myriam Jessier
  • Oct 4
  • 2 min read

Updated: Oct 9

Generative search changed the interface; it didn’t change human nature. The way people search amplifies brain shortcuts, so make facts very clear, visuals machine‑legible, and claims emotionally obvious at decision time.


We had an in-depth discussion on Search with Sean: "we" being me, Myriam Jessier, and Giulia Panozzo, a neuroscientist who specializes in search.



Noisy input, limited bandwidth, fast heuristics. 


Brains are noisy signal processors running mostly on autopilot. Search behavior is guided by shortcuts, not careful analysis, even when the output looks smarter and more polished. Generative systems align with this reality: they handle the boring parts, reduce friction, and make messy requests feel neat. Vague requests are deciphered, options summarized, answers stitched together.


Cognitive biases shape the click. 


[Image: colorful lines and a brain icon, with the text "LLMs have hallucinations and humans have cognitive biases."]

Brains reward what feels easy, familiar, and distinct. Bias shows up at every step. People anchor on the first number seen, follow authority cues like certifications and titles, and accept consensus signals from reviews even when they’re thin. LLMs mirror this and often sound correct instead of being factual. Create content for how human brains actually work: write at a reading level that feels effortless, put the most important claim first, and make one thing unmistakably different from the competition. Bias isn’t a bug; treat it as a constraint to engineer for.



What should you know?


[Image: "AI research shortcuts" illustration on an orange background, listing tasks like defining information needs and scanning.]

  • Query fan-out helps AI infer context and build answers rather than just retrieving links, making results feel more like a reasoning layer than a directory (a sketch of the idea follows this list).

  • People offload tedious steps: defining problems, keyword foraging, gathering options, comparing pros/cons. They do it via multimodal prompts involving voice, image and text search. 

  • Familiarity bias hardens: once a method “works,” it sticks. Tools get used because they feel familiar and reduce effort, not because users audit source quality each time.

  • Visual‑first microtasks (snapshot identification, error diagnosis, fit checks, and “like this but in blue”) move upstream, so images, labels, and PIM data must be machine‑readable and sentiment‑aware.

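
To make the fan-out idea concrete, here is a minimal Python sketch. call_llm() is a hypothetical stand-in for whatever model client you use, and the retrieval step is stubbed; real systems fan out against live indexes and rank the merged results.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    raise NotImplementedError("wire up your own model client here")

def fan_out(user_query: str) -> str:
    # 1. Expand one vague request into several concrete sub-queries.
    sub_queries = call_llm(
        "Rewrite this request as 3-5 specific search queries, "
        f"one per line: {user_query}"
    ).splitlines()

    # 2. Retrieve something for each sub-query (stubbed here).
    findings = [f"[result for: {q}]" for q in sub_queries]

    # 3. Stitch the findings into one answer instead of a list of links.
    return call_llm(
        "Synthesize a single answer from these findings:\n" + "\n".join(findings)
    )
```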

Fluency bias makes short, concrete claims feel truer for humans and for LLMs that mirror human judgment. This is why plain language wins, and why we talk about chunking and semantic triples. 
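
As a concrete illustration of chunking a claim into semantic triples, here is a minimal Python sketch; the product name and its attributes are invented, and the schema.org Product markup shown is one common way to expose those triples to machines.

```python
import json

# A fluent, concrete claim, chunked into subject-predicate-object triples.
# The product and its numbers are invented for illustration.
claim = "The AcmeGrill 300 heats to 260 °C in five minutes."

triples = [
    ("AcmeGrill 300", "maxTemperature", "260 °C"),
    ("AcmeGrill 300", "preheatTime", "5 minutes"),
]

# The same facts as schema.org-style structured data, so machines can
# extract them without parsing prose.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "AcmeGrill 300",
    "additionalProperty": [
        {"@type": "PropertyValue", "name": p, "value": o}
        for (_, p, o) in triples
    ],
}

print(json.dumps(product_jsonld, indent=2))
```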


Visual: data, not decoration. 


Search can start with a camera, a keyboard, or a voice command. People snap labels, parts, rooms, and error screens and ask for identification, fixes, and alternatives. Machine vision reads what’s in the frame: text, objects, layout, even mood. If the image is clear and the label is legible, the brand gets a fair hearing. If not, the model guesses.


Optimize for visual search behaviors


[Image: supermarket aisle with barbecue marshmallows and grill brushes. "The Co-occurrence Audit (aka the manly marshmallows)."]
Audit what your product appears next to in photos; co-occurrence gives machine vision a lot of context.

  • Use high‑contrast, OCR‑friendly fonts.

  • Keep product angles consistent.

  • Stage lifestyle photos where key objects co‑occur so the model infers use case and audience at a glance.

  • Make the photo and the copy say the same thing.

  • Design for the in‑aisle moment: put the decisive claim where a camera can see it. Use short attribute strings that survive glare, motion blur, and grayscale (a quick way to test this follows the list).
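
As a quick way to test whether a decisive claim survives those conditions, here is a minimal sketch assuming Pillow and pytesseract are installed (plus the Tesseract binary itself); "label.jpg" and the expected string are placeholders for your own packaging photo and claim.

```python
from PIL import Image, ImageFilter, ImageOps
import pytesseract

EXPECTED = "acmegrill 300"  # the decisive claim the label must carry

def survives(img: Image.Image) -> bool:
    """Return True if OCR still finds the key claim in this rendition."""
    return EXPECTED in pytesseract.image_to_string(img).lower()

label = Image.open("label.jpg")

# Simulate in-aisle conditions: grayscale and mild motion blur.
renditions = {
    "original": label,
    "grayscale": ImageOps.grayscale(label),
    "blurred": label.filter(ImageFilter.GaussianBlur(radius=2)),
}

for name, img in renditions.items():
    print(f"{name}: {'readable' if survives(img) else 'lost'}")
```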


Visual clarity reduces cognitive load and aligns with how machine vision parses scenes.


The future of search is fast, heuristic‑driven, and biased toward what’s easy, familiar, and emotionally resonant. 

Brands should engineer content and packaging so machines and humans can unambiguously recognize, extract, and trust what matters, then reduce friction across the journey with plain language, visual cues, and credible signals at the exact moment of choice.


