Hacker News | new | past | comments | ask | show | jobs | submit | barisozmen's comments

Even if the term 'variable' has roots in math, where it is acceptable that it might not mutate, I think the naming should be different for clarity. It's uncomfortable to reason about something that can vary but not mutate. Clearer names could be found.


Hey OP, please continue this series! I'm in the same boat as you: I've been a Python developer for years, but I recently started learning Ruby after discovering its elegance. Your post really resonates with me: https://tech.stonecharioteer.com/posts/2025/ruby/



Thank you! I will be writing more. I'm writing one about Symbols right now.

A repost of this thread hit the front page; I'm so happy!


You can reduce anything happening on a computer to arithmetic operations. If you can do additions and multiplications, you have Turing completeness: everything else can be constructed from them.
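A quick sketch of that reduction (a toy illustration, not any real FHE library): over bits, multiplication acts as AND and addition mod 2 acts as XOR, and from those you can build NAND, which is universal for boolean circuits. This is exactly why an encryption scheme that supports homomorphic addition and multiplication can, in principle, evaluate any circuit.

```python
# Toy illustration: boolean gates built from addition and
# multiplication over bits (0/1). NAND is universal, so any
# circuit can be composed from these two arithmetic operations.

def AND(a, b):
    return a * b           # multiplication acts as AND on bits

def XOR(a, b):
    return (a + b) % 2     # addition mod 2 acts as XOR on bits

def NOT(a):
    return XOR(a, 1)

def NAND(a, b):
    return NOT(AND(a, b))  # universal gate: any circuit follows

# Sanity check: full truth table for NAND
assert [NAND(a, b) for a in (0, 1) for b in (0, 1)] == [1, 1, 1, 0]
```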


While correct, that doesn't answer the question at all, though. If I have submitted my address book into an FHE system and want to sort by name, how do you do that if the FHE system does not have access to the cleartext names?


You can do that by encrypting the names. You send the encrypted names to the FHE server, and the server performs the necessary sorting computations on them.

The point of FHE is that it can operate on gibberish-looking ciphertext, and when that ciphertext is decrypted afterwards, the result is correct.

Indeed, there are people working on faster FHE sorting: https://eprint.iacr.org/2021/551.pdf
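To give a feel for "computing on gibberish" (a toy additively homomorphic mask, NOT secure and NOT real FHE, just an illustration of the shape of the idea): the server can sum values it cannot read, and only the key holder can unmask the result.

```python
import secrets

# Toy additively homomorphic 'encryption' (NOT secure, NOT real FHE):
# the client masks each value mod a public modulus; the sum of the
# masked values decrypts correctly once the summed masks are removed.
N = 2**61 - 1  # public modulus (arbitrary choice for this sketch)

def encrypt(m, key):
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

# Client side: encrypt values with fresh random masks
values = [40_000, 55_000, 62_000]
keys = [secrets.randbelow(N) for _ in values]
ciphertexts = [encrypt(m, k) for m, k in zip(values, keys)]

# Server side: adds ciphertexts without seeing any plaintext
encrypted_sum = sum(ciphertexts) % N

# Client side: removing the summed masks recovers the true total
assert decrypt(encrypted_sum, sum(keys) % N) == sum(values)
```

Real FHE schemes support multiplication as well as addition, which is what makes arbitrary computation (including comparisons for sorting) possible.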


When comparing two ciphertexts A, B, an FHE sorting function will output a sorted pair of two new ciphertexts:

E.g. FHE_SORT(A,B) -> (X,Y)

where Dec(X)<Dec(Y)

But without decrypting, there's no way of knowing whether X (or Y) comes from A or B.

Source: II. D of https://eprint.iacr.org/2015/995.pdf
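In plaintext, the compare-and-swap described above computes both outputs as arithmetic combinations of both inputs; under FHE the same arithmetic runs on ciphertexts, so the server learns neither the comparison bit nor which output came from which input. A plaintext model of it (hypothetical sketch, not a real FHE library call):

```python
def fhe_sort_pair(a, b):
    """Plaintext model of FHE_SORT(A, B) -> (X, Y) with X <= Y.

    Both outputs mix both inputs via a selector bit, so nothing
    in the arithmetic reveals which output came from which input.
    Under FHE, s, x, and y would all be ciphertexts.
    """
    s = int(a > b)              # comparison bit (encrypted under FHE)
    x = (1 - s) * a + s * b     # the smaller value
    y = s * a + (1 - s) * b     # the larger value
    return x, y

assert fhe_sort_pair(7, 3) == (3, 7)
assert fhe_sort_pair(2, 9) == (2, 9)
```

A full sort is then a network of these pairwise compare-and-swaps (e.g. a bitonic sorting network), which is why FHE sorting is expensive.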


It’s not that simple. The client has to send the server the comparison function.

To do anything practical the server usually needs to provide the client with gigabytes of per-client-key encrypted seed data.


Honestly it breaks my brain as well. I just have to take it on trust that it apparently works.


An FHE Google today would be incredibly expensive and incredibly slow. No one would pay for it.

The key question, I think, is how much computing speed will improve in the future. If we assume FHE takes 1000x more time, but hardware also becomes 1000x faster, then FHE performance will be similar to today's plaintext speed.
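To put a rough number on that (a back-of-envelope sketch, assuming hardware performance doubles every ~2 years, which is an assumption, not a given):

```python
import math

# Back-of-envelope: years until hardware absorbs a 1000x FHE overhead,
# assuming performance doubles every ~2 years (an assumption, not a given).
overhead = 1000
doubling_time_years = 2

doublings_needed = math.log2(overhead)         # ~10 doublings
years = doublings_needed * doubling_time_years # ~20 years

print(f"{doublings_needed:.1f} doublings -> ~{years:.0f} years")
```

Under those assumptions, a 1000x gap closes in roughly two decades, which is consistent with the 10-to-50-year horizon mentioned below.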

Predicting the future is impossible, but as software improves, hardware becomes faster and cheaper every year, and FHE provides the unique value of privacy, it's plausible that at some point it becomes the default (if not in 10 years, maybe in 50).

Today's hardware is many orders of magnitude faster than 50 years ago.

There are of course other issues too, like ciphertext sizes being much larger than plaintext, and the need to encrypt whole models or indexes per client on the server side.

FHE is not practical for most things yet, but its Venn diagram of feasible applications will only grow. And I believe there will be a time in the future when that diagram covers search engines and LLMs.


> If we assume FHE will take 1000x more time, but hardware also becomes 1000x faster, then the FHE performance will be similar to today's plaintext speed

Yeah but this also means you can do 1000x more things on plaintext.


But think of the children?


Answer to his thought experiment: yes, I believe a sufficiently advanced AI could tell us that. Scientists who have been fed wrong information can still come up with completely new ideas, making what we know less wrong.

That being said, I don't think current token-predictors can do that.


My read of this was that AI is fundamentally limited by the lack of access to the new empirical data that drove this discovery; that it couldn't have been inferred from the existing corpus of knowledge.


Recent LLMs have larger context windows to process more data and tool use to get new data, so it would be surprising if there’s a fundamental limitation here.


"One technique for making software more robust is to minimize what your software depends on – the less that can go wrong, the less that will go wrong. Minimizing dependencies is more nuanced than just not depending on System X or System Y, but also includes minimizing dependencies on features of systems you are using."

From http://nathanmarz.com/blog/principles-of-software-engineerin...


Completely agree. Knowing which dependencies are good and which are bad is one of the most important skills.

My two cents: if a dependency is paid, it is usually bad, because the company providing it has an incentive to lock you in.

As another point, "dependency minimalism" is a nice name for it. https://x.com/VitalikButerin/status/1880324753170256005


As well, paid dependencies usually only have one source of support, and when the company goes under or stops supporting the product you are in rough seas.

Given very few companies last forever, you have to assess if the trajectory of your project would be impacted by being coupled to their ability to support you.


For sure, this goes into the terrain of acquisition though, for which there are long-running procedures and assessment criteria. Long-term support / company longevity is one of them.

But likewise, these companies have the incentive to look like they have long-running success and can be relied upon for years / decades to come.


> when the company goes under or stops supporting the product

Or, even worse, gets acquired by someone like Salesforce


Exactly. That's another point.


I've experienced some bad paid dependencies forced on us by a non-engineering team. I've had a few good experiences with "open core" kinds of dependencies that are widely used by the community, e.g. sidekiq, and therefore less likely to suddenly vanish one day as they would almost certainly get forked and maintained by others in the same boat.

The upside of paying for something is that, assuming the owner or company is stable, I don't have to worry about some unpaid solo maintainer burning out and never logging back in.


> assuming the owner or company is stable

and continues to be stable for the lifetime of your product.


> My two cents. If a dependency is paid, than it is usually bad. Because the company providing that dependency has an incentive to lock you in.

Vendor lock-in is a risk for both purchased components and FOSS ones where the organization is unwilling to assume maintenance. The onus is on the team incorporating third-party component(s) to manage their risk, identify alternatives as appropriate, and modularize their solutions.


Developers would say this and then deploy to AWS Lambda or Vercel with a straight face


If a dependency is paid and it is bad, then maybe you just aren't paying enough for it.

If my code has a dependency, then I want there to be people who feel it is their job to support it.

Either there have to be enough people who are paid to support it, or there have to be enough people whose self-worth and identity is so wrapped up in the code that they take it as a point of honor to support it.

I don't need a company that's likely to abandon a code product and leave me hanging. I also don't need a free software author who says "bugs are not my problem - you have the source, go fix it yourself." If those are my choices, I'd rather write it myself.


> I want there to be people who feel it is their job to support it.

their "feeling" will not change reality, which might conflict. For example, a specialized database vendor would prefer that you cannot move away, and even if they feel like they want to support you, there are other economic factors to consider which could override this.


https://opensourcemaintenancefee.org/ uses payments as an incentive to keep projects going, so dependencies can be updated. .NET Rocks! interviewed them https://www.dotnetrocks.com/details/1948


I think the idea is that if you are paying, the dependency should implement some open standard or interface for which there is at least one other implementation you could move to. The vendor cannot lock you in under this requirement, since you always have the option to move (even if it's a bit expensive, it's there as a threat).
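A minimal Python sketch of that idea (all names hypothetical): the application codes against a small interface, each vendor becomes a swappable adapter behind it, and the credible option to move is what keeps lock-in at bay.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: the application depends only on this interface,
# never on a specific vendor's SDK.
class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in implementation. A VendorAStore or VendorBStore adapter
    would wrap the paid SDK behind the same two methods, so switching
    vendors never touches application code."""
    def __init__(self):
        self._data = {}

    def put(self, key, data):
        self._data[key] = data

    def get(self, key):
        return self._data[key]

def archive_report(store: BlobStore, report: bytes) -> None:
    store.put("reports/latest", report)  # vendor-agnostic call site

store = InMemoryStore()
archive_report(store, b"q3 numbers")
assert store.get("reports/latest") == b"q3 numbers"
```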


I started a relevant project: https://github.com/barisozmen/securegenomics . I believe the 23andMe event will make people more wary of sharing their genetic data, and we need ways to let people contribute to genetic research without exposing their data.


In what you show, people encrypt their genomes before uploading the data to some server, and then scientists can work on it.

How are scientists able to work on encrypted genomes?


Yes, it's by homomorphic encryption, as @vintermann mentioned.

That being said, scientists can implement their own protocols and use whatever technique they want. For example: https://github.com/securegenomics/protocol-alzheimers-sensit....

Our platform makes federated computing + homomorphic encryption analysis easy, but the protocols are customizable.


Homomorphic encryption, presumably? It's not impossible. But I also think it's overkill. Also, I don't know of good open source software that lets me do the kind of analysis I want even on non-encrypted data.


> Homomorphic encryption, presumably?

That would not be very comforting; it would mean that even encrypted, we can learn things from the encrypted form, which kind of defeats the purpose, unless I'm missing something.


It was a great read! Just curious, how stable has your BGP setup been since deployment? Did you need to do a lot of maintenance, or was it fire-and-forget?

