Paying Attention to LLM (AI) Gaps

“So what’s been going on here,” you might ask. Having missed our weekly roundup (don’t worry, it’s back in a few days), there’s a bit of fun which we’ve been paying attention to in respect to “advancing tech operations.” One of those has been the continued push and usefulness of LLMs for smaller tasks. Not specifically the agentic stuff you might have been reading about, but smaller, focused items which augment our thinking and processing abilities.

That said, there’s a downside to this as well which is worth paying attention to, and that would be the nefarious uses. Prompt injection attacks have come back into play (for those in the early 2000s who remember things like this with Excel, welcome back). The folks at Brave posted some important findings about how some of these browsers aren’t protecting themselves or their users from this. And its not so much eye-opening, as much as it is makes you aware that advancements also have consequences.

Some of us have used this adversarial tactic for their own advantage (for example, putting code, comments, etc. into one’s resume or social media profiles to “fool” the LLM). And it makes sense to offer offensive abilities to what usually can be postured as negative. But, as in all things moving forward, you cannnot just take a step forward without looking where you are stepping. And with LLMs, this is very, very important. Not every advanced use will end up advancing you.