Hacker News | rugina's comments

> partially because Intel didn’t see a market

I saw some articles arguing that Intel saw the market perfectly well; they simply could not deliver, and rather than admit that, they claimed the CEO made the wrong call.


Both were probably true to some extent, but I doubt they wouldn't have figured out a way to execute given the huge opportunity.

The mobile CPU market is worth a meaningful chunk of Intel's overall current market cap, and they're not participating.


The English word "bit" has the same meaning in French as the word "Coq" has in English.


I am French, I know :)

But in my experience this simply causes some giggling in the classroom for a few days and that's it, which is not the experience I have seen recounted regarding Coq in the US.

There are multiple differences between these cases:

- The word "bit" is mostly used when speaking English, not in a French sentence (we use "octet" - I don't know the history here but I wouldn't be surprised that this is specifically because of the French meaning of "bit"). Coq, being the name of the tool, is used as-is in English sentences. [This is wrong, I somehow confused bit and byte here]

- Even when used in a French sentence, the gender is different ("un bit" vs "une bite"), removing ambiguity

- Bits are just one fraction of the curriculum, not the name of the tool used in every single lesson of the course

I will refrain from commenting further on this topic, as it has been rehashed many times already and distracts from the work on rust-to-coq translation.


Are you sure? I'm not French but according to French wikipedia "octet" means "byte" not "bit": https://fr.m.wikipedia.org/wiki/Octet


Sorry, you are right of course! I somehow confused bit and byte, I have updated my comment. Thanks for pointing that out.


I think NMT (neural machine translation) was broken all along. Not in the neural network part, but in choosing the right answer. https://aclanthology.org/2020.coling-main.398.pdf


Since LLMs are loosely based on NMT models, it seems research on newer sampling methods like Mirostat might help here.


We, humankind, managed to find a good optimisation for this problem by putting spaces between words. When trying these algorithms for searching for a word in a string of text, I was surprised by how little they could improve over just skipping to the next word.


I think you would be surprised by how much of a performance hit that would be compared to the existing state of the art. The thing you are missing is that the human visual system evaluates the existence of spaces in parallel; a single-threaded algorithm would need to check every character to see whether it is a space, in addition to checking the letters of the search string. That approach also fails to generalize to languages without spaces, search strings containing spaces, etc.

Also if you look at algorithms like Boyer-Moore, they effectively DO skip spaces, but do so in a manner that is language / content agnostic. (https://www-igm.univ-mlv.fr/~lecroq/string/node14.html)
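As a minimal sketch of how that skipping works, here is the Horspool simplification of Boyer-Moore in Rust (the function name and test strings are my own, for illustration): a precomputed per-byte table lets the search jump several characters at a time, regardless of where the spaces fall.

```rust
// Boyer-Moore-Horspool sketch: precompute, for each byte value, how far
// the pattern may safely shift when that byte lines up with the pattern's
// last position. Bytes absent from the pattern allow a full-length jump.
fn horspool_find(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    if needle.is_empty() || needle.len() > haystack.len() {
        return None;
    }
    let m = needle.len();
    // Default shift: the whole pattern length.
    let mut shift = [m; 256];
    for (i, &b) in needle[..m - 1].iter().enumerate() {
        shift[b as usize] = m - 1 - i;
    }
    let mut pos = 0;
    while pos + m <= haystack.len() {
        if &haystack[pos..pos + m] == needle {
            return Some(pos);
        }
        // Jump by the shift of the byte aligned with the pattern's end —
        // this is the "skip ahead" behaviour, content-agnostic.
        pos += shift[haystack[pos + m - 1] as usize];
    }
    None
}

fn main() {
    let text = b"skipping to the next word";
    assert_eq!(horspool_find(text, b"next"), Some(16));
    assert_eq!(horspool_find(text, b"space"), None);
    println!("ok");
}
```

Note how the shift table plays the role spaces play for human readers: most alignments are rejected after looking at a single byte, without inspecting every character.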


By “improving” you mean changing the problem definition: Word search is a completely different problem.


Given that the GitHub repo is almost three years old, I expect Martin Fowler to already have Dada Patterns, Refactoring in Dada, Dada Distilled, Dada DSL and Dada Best Practices ready to publish.


They don't allow development on GNU/Linux.


This is really normal for most small companies with good security posture, honestly. The company will pick one platform where endpoint management is functional, and require it. Code and secrets can't live on machines without endpoint management.

If the productivity/hiring/morale hit from requiring one platform becomes too great, then they'll get IT to figure out how to manage other kinds of endpoints. But in a small company, trying to manage disparate endpoints across multiple OSes is hard and expensive, while allowing corporate secrets on unmanaged endpoints is also a bad idea. So this is the trade-off.


> This is really normal for most small companies with good security posture, honestly. The company will pick one platform where endpoint management is functional, and require it. Code and secrets can't live on machines without endpoint management.

What is "endpoint management," in layman's terms? My corporate laptop has 2 different "endpoint manager" applications running (and about 30 scripts that run in task manager). What are these things doing for them?


Endpoint management software is, in effect, a set of backdoors that allows IT to monitor every file on disk, every network connection opened, every program run, and every action taken on a company-owned workstation. It also grants full control over the system: installing and removing programs; creating, editing, and deleting files; viewing what's happening on the screen; and shutting things down entirely if desired.


Piecemeal response, but endpoint managers are really there to ensure:

1) That the device is compliant with whatever security standards apply (AV is running, no weird user accounts that are admins, etc.)

2) That if the machine is lost || fails to check in, it is wiped.

3) That if security standards change, those changes can be rolled out.

4) That activity on the device is somewhat logged, not to a great extent, but: login events (and what factor was used), whether admin elevation was invoked, whether a strange executable was run, etc. These logs are only useful in certain circumstances, and I've never seen anyone actually use them outside of arbitration.



Rebranded antivirus + corporate compliance spyware


really?


Yes, see https://basecamp.com/handbook/managing-work-devices#mobile-d...

They also don’t allow Windows or iOS.


Which is a perfectly reasonable take (especially the former).


Tried it and it feels slow. I opened a Rust project, and after a long wait for it to index crates, I opened a file and deleted a commented line. It took a few seconds to display the annotations again.

While waiting, I did some work in neovim.


FWIW, I've had the same experience with CLion at my previous job. Syntax highlighting lagged 30+ seconds behind. Autocomplete was unusable because it sometimes took almost a minute (for method names!!), or would pop up with stale/irrelevant results. Everything in the UI felt sluggish. It used an ungodly amount of RAM. Plugins would randomly stop working and require me to reinstall them.

The only thing going for it was that the debugger worked better than gdb (although I think it uses gdb under the hood, it probably does something smarter than what I manage when I manually fiddle with gdb).

VS Code is sometimes a bit sluggish when handling input. I don't know why, and it only happens on my MacBook Air (so it might be thermal throttling?), but it happens rarely.

The nice thing about Code is that it's also easier to work across multiple languages without having to switch tools.

However, it does say it's still in preview, so it might get better?

Also, the UI looks radically different from (and nicer than) what CLion looked like, so maybe they have a new UI framework that fixes some of the problems I had with CLion.


Tokio uses a pool of threads for disk I/O because it relies on the operating system's synchronous file-system calls.


> I’m curious though… are people using it in production much?

Here's Daniel beating his own drum: https://archive.fosdem.org/2017/schedule/event/curl/


> the noise level of the keyboard made it difficult for my colleagues to focus

Had the same issue in the past, but nowadays everyone is wearing noise-cancelling earphones.


What's the point of going to the office if everyone is hiding from everyone else?

