Hacker News | ritwikgupta's comments

This is about changing the way FedRAMP accreditation is done for any cloud service, like Box (or a new SaaS that you may create tomorrow). The FedRAMP process requires you go through a certain set of audits, meet a certain set of standards, etc., in order to be approved to host CUI (IL4/5) or SECRET (IL6) information.

Normally this can take a lot of time and monetary investment. On the one hand, these processes encode cybersecurity best practices. On the other hand, they keep new companies out of the market.

It seems this effort is doing away with a lot of those processes. I hope the level of compliance stays the same.


IL4/5/6 actually add a bunch of additional controls and parameters on top of the standard FedRAMP baselines.


I'm pretty sure IL4/5/6 are all outside the scope of FedRAMP.


Public trust is not a security clearance; it is simply a more involved background check. A security clearance is only granted after a T3/T5 investigation and adjudication of the request. The SF312 NDA signed in order to receive your clearance does not expire.


With the recent veto of SB 1047, it becomes even more important to ask why these proposed policies are lacking. We suggest that all modern AI regulations are overly broad and flawed because they miss the most important part of AI capabilities: data.


Do you have an example of this workflow? Are you developing outside of Xcode on a non-macOS platform in Swift and then essentially compiling and packaging using GitHub Actions?
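(For context, a minimal sketch of what such a workflow might look like; the `swift-actions/setup-swift` action name and the versions below are illustrative assumptions, not verified details of the parent commenter's setup:)

```yaml
# Hypothetical .github/workflows/build.yml for building a Swift package
# on Linux without Xcode. Action names/versions are assumptions.
name: Build Swift package

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: swift-actions/setup-swift@v2   # community toolchain action (assumed)
        with:
          swift-version: "5.10"
      - name: Build and test
        run: |
          swift build -c release
          swift test
```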


Fascinating. The massive variance in the percentage of chickens that prefer roosting off the ground is interesting. I wonder what environmental pressures drive this decision.


Foxes and maybe raccoons are the driving factors where I live. I have also seen cats go after younger, smaller chickens. If the owner provides a safe shed that is closed up at night, there is no need for the chickens to roost in trees. They will voluntarily enter the shed in the evening.


You are grossly underestimating the ability of the FBI's cyber forensics teams to discern whether data was planted maliciously or produced deliberately, as well as underestimating the ability of the courts and a jury to understand when someone is willingly producing CSAM versus accidentally being in possession.

This prosecution is the first of its kind for the DOJ. It is highly unlikely that they would pick this case to take to trial if they were not certain about the actions the perpetrator engaged in.


This case is not about production, only possession.


This is a misinformed and incorrect take. The PROTECT Act of 2003 [0] makes it illegal to possess CSAM that is generated by superimposing faces of minors onto sexually explicit imagery, or vice versa.

This law predates modern generative AI models by nearly two decades. There is no need to engage in conspiracy theories here; the law is clear that this kind of imagery is illegal.

[0] https://www.congress.gov/bill/108th-congress/senate-bill/151


It's not misinformed or incorrect. I'm aware of the current law and how it stands in opposition to the First Amendment. Reread my post.

> There is no need to engage in conspiracy theories here

Please do not mischaracterize my post in such a light. There are no conspiracy theories here.

There are ongoing, completely public, campaigns by both the Executive and Legislative branch to regulate access to generative models. There is a reason this press release vaguely uses the term "deepfake", which is completely distinct from dragging and dropping a minor's face onto an adult's body. Whether that reason is deliberate or negligent, it still serves the greater purpose.

The debate over access to generative models with respect to CSAM has been hot for a while now, to ignore that debate and characterize my post as perpetuating conspiracy theories is just disingenuous.


1. The PROTECT Act provisions have repeatedly been upheld as constitutional by both appellate courts and the Supreme Court, as long as the CSAM in question meets the Miller or Ferber standards. Either the law is constitutional, or you're proposing that the courts are illegitimate, the latter of which is conspiratorial.

2. You are right that there is a campaign to limit access to open-source generative AI models, but it is not an initiative led by the government. Companies such as OpenAI, Anthropic, and Google are leading the charge in emphasizing the danger of open-source models and are lobbying every day to limit access. The executive and legislative branches are following suit with what industry executives tell them, because those executives are deferred to as experts.

Industry policy teams have invented vague, ill-defined terms such as “frontier models” and portray these models as having the same power as nuclear weapons. They have a vested interest in being the sole controllers of this technology.

If you want to counter governmental efforts to limit access to such models, start by countering the FUD pushed by industry in this space.


> The PROTECT Act provisions have repeatedly been upheld as constitutional by both appellate courts and the Supreme Court, as long as the CSAM in question meets the Miller or Ferber standards. Either the law is constitutional, or you're proposing that the courts are illegitimate, the latter of which is conspiratorial.

I'm sorry, but it is absolutely not "conspiratorial" to suggest that the judicial system is compromised. You have to be living under a rock to not understand that all three branches of government are effectively compromised to party-line political agendas backed by corporations, NGOs, etc.

Our forefathers would have spat in disgust at the idea of the Bill of Rights being perverted to the point that drawing the wrong lines on a piece of paper and keeping it in one's own home calls for stripping someone of their liberty and placing them in prison. And this extends to modern technology such as image editors. And I stress, we are talking about production/possession, not distribution. There is actually a case for restricting distribution of such material.

Generally speaking, only a fool could look around at the state of the US government and say, "the laws are just and anyone who questions their justness is conspiratorial." That is textbook gaslighting, whether you intend for it or not.

> You are right that there is a campaign to limit access to open source generative AI models, but it is not an initiative led by the government.

Again, calling bullshit. [0]

Our government is in the business of staying in business, at the expense of individual liberty. This is well established going back decades. I'm not even going to argue that point with you. And because of this, they will absolutely treat foundational models with the same playbook as cryptography in the 90's if they feel like it's necessary. [1]

The government already did all of this with cryptography, and it was a war hard won. So you have to make the airtight case that they won't do it again. Not the other way around. You have to prove that they have changed for the better. I don't have to prove anything because history is on my side.

Please, I beg of you, do not delude yourself that the US government wants what is best for you, while they are spending billions bombing hospitals overseas with tax dollars that could greatly benefit our own citizens. Do not delude yourself that it is just corporations, or just the government. It is CorpGov. They are, in the end, one and the same, in that they play ball when it suits them, and play against each other when it suits them. Don't be a sucker. And please don't accidentally gaslight others by throwing out accusations of conspiratorialism the moment they question the credibility of the US government.

I expect any followup response to dispense with the gaslighting and ad hominem, and focus on fostering a constructive debate. If you can't do that, just end the conversation here and do some hard thinking.

[0]: https://www.whitehouse.gov/briefing-room/statements-releases...

[1]: https://en.wikipedia.org/wiki/Crypto_Wars


The old search was much better than the new search. The new search can never find exact strings in my repos, even when I have copy-pasted those strings from my repo into the search bar!


Apple may already be headed in that direction. They already have unified CPU and GPU RAM. It doesn’t seem far-fetched to imagine that they could unify persistent storage and memory.


Knowing Apple's marketing moves, they could definitely do that: just use a single number to describe memory, and then pretend it's a big number.


I can already see it: base 128GB of total system unified memory for your files and data.


Keep all other files in iCloud for big money. Genius.


Well, technically, Intel and AMD both use the regular system RAM for their integrated graphics VRAM, but I see what you mean.


Intel and AMD also support unified memory for their integrated graphics. It's been a while since you needed a statically cordoned-off area of main memory ("shared memory") for the iGPU to work.

Consoles have been using unified memory since the 8th gen (PS4/XB1/Switch, kinda sorta even WiiU).

And Nvidia has had CUDA unified-memory slide decks going back at least five years.


The Xbox 360 already had unified memory too. That gave it a slight edge over the PS3 in the long term, because it was more flexible than the PS3's fixed 50:50 split.


The original Xbox before it, too.


And the N64 before them. I think it was the first console with unified memory.


Military contracts are posted and solicited publicly. There's no "dark" acquisition of the type that you are suggesting. You can look up whether OpenAI has any contracts with the DoD at [0]. They do not.

[0] https://www.usaspending.gov/

