
Genshin's UI font is produced by Hanyi, not Monotype.

https://www.hanyi.com.cn/adminlte/ueditor/image/20230906/169...


It's sad to see so many people blinded by this. The current situation of RCS is simply that Google saw Apple disguise iMessage as SMS and wanted to do the same. RCS is merely a vehicle for Google.

They could have just layered their own chat platform on top of Google Messages, but we all saw how Google's IM ventures went: Chat, Hangouts, Allo, Meet, etc. So they muddied the water (all the way down to the carrier level) to make it look like it's Apple's fault for not adopting RCS. And people actually fall for it.

Nobody wanted RCS. Even carriers don't want to maintain RCS; they just use Jibe. And that's exactly what Google wanted. My RCS conversations with friends don't even show up in my carrier's usage. How is that any different from iMessage...

You know who chose to self-host their own RCS servers? Yes, Chinese carriers! They call it 5G Message. A new ad delivery channel for businesses, hooray! Instead of plain text and a link, your campaigns can now even have MENUs inside! I can send an SMS to a Chinese number, I can send an iMessage to a Chinese number, but I can't send RCS. Truly a "Universal" profile.


I agree with all of this except for the claim that "Google wanted this". I think Google is as annoyed with this situation as everyone else. They would've preferred to have their own iMessage alternative, but they launched a dozen which all failed, so they went "Well, we can't make our own that people want to use, so let's get the carriers to make an upgraded version of SMS". And then the carriers didn't want to do that but the "it's decentralized!" message stuck with users and even a few governments, so now RCS is the worst of all worlds: it's a de facto Google service, but with a janky, half-baked decentralized protocol, where Google has limited capability to improve it compared to a native Google chat app.

It's a complete shitshow.


Cinebench 2024 takes considerably longer to finish and saturates whatever thermal headroom the device has, so it can be treated as an accurate reading of sustained performance.

The chart below aggregates results from CPU-monkey, Geekerwan's chip analysis, devices of my own, and various other reports.

Apple M1 series, 3.2GHz, 5W: ~115
Apple M2 series, 3.5GHz, 5W: ~120
Apple M3 series, 4GHz, 7.5W: ~140
Apple M4 series, 4.4GHz, 7.5W: ~170

Snapdragon X1E, 4.3GHz, 10-20W: ~140
Snapdragon X2E, 5GHz, >20W: ~160

AMD 9950X, 5.7GHz (desktop best): ~140
AMD AI 9 HX 375, 5.1GHz (laptop best): ~125

Intel Ultra 9 285K, 5.7GHz (desktop best): ~145
Intel Ultra 9 285HX, 5.4GHz (laptop best): ~135
Intel Ultra 9 288V, 5.1GHz (low-power laptop best): ~125

Apple's M5 may be the first CPU ever to near ~200 points in Cinebench single-core while still keeping core power draw under 10W. Competitors trail on all fronts by about two or even three generations in their respective device classes.
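To put that in rough numbers, a back-of-the-envelope sketch using only the figures above (actual power draw varies by workload):

  Apple M4:       ~170 pts / 7.5 W  ≈ 23 pts per watt
  Snapdragon X2E: ~160 pts / >20 W  ≤ 8 pts per watt

Even the closest competitor needs roughly 3x the core power per point.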


Using Cinebench single-core scores to compare desktop chips is very dishonest. Cinebench's real workload is embarrassingly parallel, so many powerful cores get you a much better score.

For example, the M4 does get around 170 single-core, but the Snapdragon X2E gets just under 2000 multi-core, over double what the base M4 scores. If your application resembles Cinebench, the X2E is the better CPU for it. To match the X2E you need to go up to the 16-core M4 Max.

The 16-core variant of the M4 Max is only available in a 16-inch MBP that starts at €4,800; it's not clear how much X2E laptops will cost, but I would bet a lot of money it will be much less than that...

As for desktop parts, the only Apple product that beats typical x86 CPUs in multi-core is the M3 Ultra, which is a pretty bad deal because it doesn't scale well in other ways (GPU). Otherwise, Intel (i9s/Core Ultra 9) and AMD (Ryzen 9) still hold the crown in pure multi-core score.

The score of a 16-core M4 Max actually puts you in Core Ultra 7 265K territory. You can put together a system around that CPU and a 5070 Ti GPU (which posts raw benchmark numbers around the same as the 40-core M4 Max variant but will actually perform much better for most things) for a full €1,200 less than a comparably equipped Mac Studio (with the luxury of double the storage, even). If you don't need dedicated GPU power, or could make do with a low-end GPU, the savings would be more like €1,700-2,000 (the 5070 Ti is an €800 part).

Let's be real: Apple Silicon SoCs are very power efficient, but they are definitely not performance-maxing, and they are even less money-efficient. Arguing about top performance while ignoring multi-core is very suspicious.

Now here is another fact: the 16-core M4 Max can draw more power than the 140W power adapter included with the 16-inch MBP, which has a battery capacity of 100Wh. Run the thing at or near full tilt and it will actually last less than an hour on battery. It's funny, because Apple aficionados keep singing the praises of Apple Silicon, and many have been burned by that unexpected fact. It's easy to dunk on the high-power gamer-type laptop that can't run well on battery, but the same is true if you use the full power of an Apple Silicon laptop. You might get an extra half hour of runtime, but that's basically irrelevant.

The reality is that everyone singing the Apple Silicon efficiency praise doesn't have truly demanding workloads; otherwise the battery life wouldn't be a meaningful difference.

High-performance laptops don't make a lot of sense, whether they are Apple's or other brands'. They are mostly convenience products (for docking/undocking) or status symbols.


It's to compare core architecture, brother. The fact is Apple's cores have the best IPC, the best efficiency, and the best peak performance.

And you put out a long, long post to point out what everyone already understands: putting in more cores and running them at a lower frequency yields better efficiency at full load... That's why we got to the point where, in today's x86 laptops, a single core running at full speed already exceeds the sustained multi-core power target (28-60W depending on device class), because Intel and AMD have no other way to raise performance than adding more cores.


It does. -c:a aac_at uses Core Audio (the _at suffix means AudioToolbox).
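For example (a minimal sketch; the file names and bitrate are placeholders):

  ffmpeg -i input.wav -c:a aac_at -b:a 256k output.m4a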


According to the App Store Review Guidelines:

3.2.1 (vii) Apps may enable individual users to give a monetary gift to another individual without using in-app purchase, provided that (a) the gift is a completely optional choice by the giver, and (b) 100% of the funds go to the receiver of the gift. However, a gift that is connected to or associated at any point in time with receiving digital content or services must use in-app purchase.

Many years ago in China, WeChat said Apple was about to collect fees on readers' donations to writers' articles (whether paywalled or not). At the time, WeChat was taking a cut of such donations. Later WeChat agreed to stop taking that cut, and the payments were fulfilled not by in-app purchase but by WeChat Pay (as a means of fund transfer).

However, in the case of Patreon, people are often not donating to the creator; they are paying to gain access to the creator's work (early access, exclusive membership content, etc.), so Patreon is selling the product produced by creators.

Edit: I want to add that there should be a cut for Apple in this case, but not 30% or 15% (via subscription).


I don't give a shit what's written in Apple's TOS. If they wrote that they were entitled to a 10000% processing fee, would that be OK?

Apple are using their status as a software/hardware vendor to force their users to use their App Store. This is obviously illegal. There is no excuse for it.


> I've tried to feed screenshots to ffmpeg and other tools, and it's just... unusable. It works, but consumes way too much resources.

Did you try to use the hardware encoder? Modern computers have dedicated silicon to accelerate/offload video encoding and decoding. Your 2019 Mac has an Intel GPU with H.264 and HEVC hardware encoders, plus a T2 co-processor that can also encode HEVC. If you don't select a specific encoder (with the _videotoolbox suffix on Mac) via -c:v, ffmpeg defaults to a software encoder, which burns CPU.
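A quick way to check that the hardware path works (a minimal sketch; file names and bitrate are placeholders, and -tag:v hvc1 just makes the HEVC output play nicely in QuickTime):

  ffmpeg -i input.mov -c:v hevc_videotoolbox -b:v 5M -tag:v hvc1 output.mp4

If that runs with low CPU usage, the encode is being offloaded.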

> how to scan the screen and send only parts of the screen that have been updated

You'd be reinventing the interframe compression that video codecs already do.

> Also, curious to hear about video encoding efficiency vs 60x JPEG creation. Is it comparable?

I see that you are comparing images pixel by pixel to dedupe, resizing to 1280px, and encoding each frame to JPEG, all on the CPU. In essence you've implemented Motion JPEG. Below is a command that lets you evaluate a more efficient ffmpeg setup.

  # Note: shell comments can't follow a line-continuation backslash,
  # so briefly: -f avfoundation = capture input (macOS-specific),
  # -an = no audio, -r 1 = 1 fps, -vf scale=1920:-2 = 1920px wide
  # (height auto, kept even as H.264 requires), -b:v 2M = 2 Mbps
  # bitrate for clear and legible text.
  ffmpeg \
    -f avfoundation -i "<screen device index>:<audio device index>" \
    -an \
    -c:v h264_videotoolbox \
    -r 1 \
    -vf scale=1920:-2 \
    -b:v 2M \
    out.mp4

You may want to set up an RTMP server that ffmpeg can transmit the encoded stream to, so visitors can view the video.
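A sketch of that last idea, pointing the same pipeline at an RTMP server instead of a file (rtmp://localhost/live/screen is a placeholder URL; you'd run something like nginx-rtmp or MediaMTX behind it):

  ffmpeg \
    -f avfoundation -i "<screen device index>:none" \
    -an -c:v h264_videotoolbox -r 1 -vf scale=1920:-2 -b:v 2M \
    -f flv rtmp://localhost/live/screen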

