A constant pattern in creating new software seems to be the tendency to automate the things people can do easily while leaving the more complicated work to people (see also The Ironies of Automation). We build the software we can build instead of the software we should be building. The same seems to be true of Generative AI.
For example, since these tools make it easy to summarize text, an obvious idea is to feed them a meeting transcript and have them turn it into a summary, presumably making the time knowledge workers spend in meetings more valuable. But this assumes that a generative AI can create information (a summary) or even knowledge (?) from data (a transcript). That doesn’t seem to be true: the summary generated by the gen AI is still just data, and it will need a human editor to be turned into information. Of course, there may be trivial cases where the gen AI can produce a genuinely useful summary, but maybe the conversation should be: do we need those meetings at all?
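To make the workflow concrete, here is a minimal sketch of the transcript-to-summary step, assuming the OpenAI Python SDK and an API key in the environment; the model name, prompt, and file name are illustrative, not a recommendation:

```python
# Minimal sketch: ask a gen AI for a draft summary of a meeting transcript.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment;
# the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_transcript(transcript: str) -> str:
    """Request a draft summary. Note the output is still just data:
    a human editor has to verify it before it becomes information."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this meeting transcript into key decisions and action items."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("transcript.txt") as f:
        print(summarize_transcript(f.read()))
```

Note that nothing in this pipeline checks whether the summary is faithful to the conversation; that verification step is exactly the human editing the argument above is about.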
I understand (and kinda share) the excitement around each newer version of these tools, but somehow they are not getting that much better, and the promise of a perfect assistant is always one version away.