
There is a lot of negativity about the way it is used I think.

Most people will agree that LLMs are pretty neat, but now, instead of every startup being "like Uber but for ...", they are "like ChatGPT but for ...".

Everyone is trying to chuck AI into their products, and most of the time there is no need, or the product is just a thin fine-tune over an existing LLM that adds essentially zero value. HN is fairly negative on that sort of thing I think (rightly so, IMO).



I think a major problem that is going to become more and more obvious is that AI is actually pretty expensive compared to good old deterministic computing. If there's a way to solve a problem without resorting to sending an inference request to a gpu cluster, we should do it that way. Otherwise you're wasting electricity.


People said that about virtualized code, but then computers got 100x faster and now we're running 10 megabyte web apps in a 500 megabyte client to display a simple page of text, and it still loads acceptably fast.

The AI algos will get 100x faster through a combination of hardware and software optimizations. Then, deterministic vs AI will mean the unnoticeable difference between displaying some info to the user in 0.001s vs 0.1s. Then, AI will become the default.


I'm not sure this is actually correct. Performance increases were reliable and consistent for a long time, but we're reaching the physical limits of Moore's law. Unless you have new physics or new models of computation, we might hit an actual speed limit this decade, when transistor sizes are bounded by the size of atoms.

I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of ai is unacceptable.


New models of computation are a given, and improved application-specific circuits for the most widely-used models are also a given (I believe current models run mostly on enterprise GPUs). Together these could easily make AI models 100x more efficient even without any advancements in the underlying chipmaking processes.

> I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of ai is unacceptable.

For high-assurance apps, I agree there will always be a need, sure. Of course, these high-assurance apps will be supervised by AI that can inspect them and raise alarm bells if anything unexpected happens.

For consumer apps though, an app might actually feel less "random" to the user if there's an AI that can intuit exactly what they are trying to accomplish when they perform certain actions in the app (much like a friendly tech-savvy teacher sitting down with you to help you accomplish something in the app).


You have a lot of faith in this ai stuff. It's not magic.


AI is already considerably more knowledgeable and easier to communicate with than the customer service representatives I interact with day to day. Interacting with an API through ChatGPT, I would have a lot more faith that my inquiry would be solved given the tools available at that customer service tier.

It's only been three years since AI Dungeon opened my mind to how powerful generative AI could be, and GPT-4 blows that out of the water. Whatever gets released three more years from now will likely blow GPT-4 out of the water.

AI is already considerably smarter than the dumbest humans, in terms of its ability to hold a conversation in natural language and make arguments based on fact. It's only a matter of time before it's smarter than the average human, and at the current pace, that time will arrive within the next decade.

All useful technology improves over time, and I see no reason to believe AI will be any different.


This was the gist of my PhD: a deterministic algo to replace a wasteful genetic (evolutionary) algo. It was multiple exponentials less wasteful.


Show us the paper, that sounds sick.


I'll do you one better

https://github.com/verdverm/pypge

https://github.com/verdverm/go-pge/blob/master/pge_gecco2013...

The reviews had awesome and encouraging comments


Let's take a zeroth-order estimate that a single GPT-4 query uses 0.01 kWh (which is probably massive overkill for most queries, but we'll roll with it).

Let's high-ball US residential electricity prices at about 25¢ per kWh. So 25¢ of electricity gets us 100 GPT-4 queries, and $25 gets us 10_000.

Let's low-ball average US developer salaries at a cool $100_000/yr. Fifty 40-hour weeks in a year makes 2_000 working hours, which makes $50 per hour. So with all our generous margins working against us, a US developer would have to be making 20_000 GPT-4 queries an hour, or a little over 5 per second, in order to cost as much in electricity as he is making salary-wise.

I have no real point to this story except that electricity is much cheaper than most people have a useful frame of reference for. My mom used to complain about teenage me not running the dishwasher at full load, until I worked out that the electricity and water together cost about 50¢ a run and offered her a clean $20 to offset my next 400 three-quarters-full runs.

Your bonus programming tip: Many programming languages let you legally use underscores to space large numbers! Try "million = 1_000_000" next time you fire up Python.
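
Putting that tip to use on the numbers above, here's a quick back-of-the-envelope sketch in Python. These are the same rough guesses as before, not measurements of anything:

    # All rough guesses from the comment above, not measured values.
    kwh_per_query = 0.01          # generous estimate of energy per GPT-4 query
    price_per_kwh = 0.25          # high-ball US residential electricity, $ per kWh
    salary = 100_000              # low-ball US developer salary, $ per year
    hours_per_year = 50 * 40      # 2_000 working hours

    cost_per_query = kwh_per_query * price_per_kwh    # $0.0025 per query
    hourly_wage = salary / hours_per_year             # $50 per hour
    breakeven = hourly_wage / cost_per_query          # 20_000 queries per hour
    print(breakeven, breakeven / 3_600)               # ~5.6 queries per second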


I actually would have guessed a full dishwasher load would cost less than that, maybe 15-20 cents.


More expensive to run but cheaper to write.

Engineers are expensive, so actually the cost/benefit analysis is a little more complex and different problems will have different solutions.


The proliferation of extremely expensive algorithms isn't necessarily good. A lot of ink has been spilled about how much useless work crypto does. We should consider the impact of AI on the total computational resources of the species carefully.


I think that's why there's a big focus on its ability to write code: spend the gpu-cluster cost once, generate code, run that code on a tiny instance. Need to make changes? Warm up the cluster...


I agree, but then I expect the major benefit of current AI will be in providing reference solutions to previously intractable problems - it'll be much easier to develop more deterministic, classical / GOFAI methods of solving those problems once we have a wasteful but working solution to play with and test against.


For now it is. If it continues to be the best way to solve problems, the cost will drop with time.



