Hacker News | dylanjha's comments

Ever consider shipping this as an SDK that other apps can embed? This is something I’ve heard many people ask for.


I can work on it when I migrate from the Remotion library to pure canvas.


Why do you want to migrate from remotion to pure canvas?


There are some mismatches between the rendered output and the Remotion real-time preview, so having low-level control through pure canvas can fix this.


Not everyone has to use everything.

In my 13 years there are a lot of technologies I've never used. Maybe there were opportunities where you could have, or maybe things are working just fine for the problems you're solving and you don't need it, which is fine too.


> I think there is a clear tension between the component desire for encapsulation vs the web designer wanting to impose their styling on the nested components

This is the inherent tension. It requires good web component authoring to expose:

1. `part`s that can be accessed by application-level CSS

2. slots for the application developer to inject html

3. CSS custom properties (--variable) -- these pierce the shadow DOM

The web component authors have to be very intentional about these things. There are good examples and bad examples and I think people are still learning how to do this well.
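A minimal sketch of all three hooks together (the element name `fancy-card`, the part name, and the custom property are illustrative, not from any real library):

```html
<!-- Inside the hypothetical component's shadow DOM template -->
<template id="fancy-card-template">
  <style>
    button { outline-width: var(--card-button-outline-width, 1px); }
  </style>
  <header part="header"></header>
  <slot name="actions"></slot> <!-- 2. application injects its own HTML here -->
</template>

<!-- Application-level CSS and markup reaching into the component -->
<style>
  fancy-card::part(header) { font-weight: bold; }   /* 1. part */
  fancy-card { --card-button-outline-width: 2px; }  /* 3. custom property pierces the shadow DOM */
</style>
<fancy-card>
  <button slot="actions">Close</button>
</fancy-card>
```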


Isn’t the problem that this is recursive? i.e. if you use a component which itself contains a component that contains a button, and that button should be styled, should the intermediate components be worrying about it? And does the end consuming application need to care when new widgets are added to components nested two deep?

It is almost like the need for styling necessitates all the old dependency injection and factory gunk that older frameworks became notorious for.


Styling components nested inside other components can still work with either slots or scoping CSS custom properties with parent selectors.

.login-form { --button-outline-width: 2px; }

.credit-card-form { --button-outline-width: 3px; }

Of course, the component needs to be authored & documented in a way that supports this.

For example, shoelace has a "Drawer" component: https://shoelace.style/components/drawer

By default the drawer component has an "X" button in the header to close it. If you want to override that, instead of trying to style the nested "X" button you can pass in your own header actions with slot="header-actions"
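With Shoelace's drawer, that override might look like this (a sketch based on the documented `header-actions` slot; the specific icon button and its attributes are illustrative):

```html
<sl-drawer label="Settings" open>
  <!-- Replaces the default header action rather than styling the nested "X" button -->
  <sl-icon-button slot="header-actions" name="gear" label="Preferences"></sl-icon-button>
  Drawer content goes here.
</sl-drawer>
```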


(author from Mux here) -- that is correct. For the stuff we build for in-house use on mux.com and dashboard.mux.com we have a components library written in React.

You nailed it that we are shipping SDKs with visual components (like a video player) that need to be compatible across all kinds of frontends.

Instead of N number of SDKs to maintain where N is every front end framework, we have 2: a web component, and a React wrapper around the web component. Maybe in the (near) future we only have to maintain 1.


(author here) I think the problem might be more around bureaucracy mixed with bad & buggy code than it is a specific problem with web components. Can you expand on that?


Thanks, I think you're 100% right, it's the red tape more than the tech. My issue is that when an in-house React component is poorly coded, it's trivial to inject CSS overrides or select internal elements of the component to change its behavior. With web components, the events emitted are locked in, and each sub-element (or worse, sub web component) is nested away, only accessible via the shadow DOM, making edge-case or hotfix solutions that much more trouble to deal with. In both cases the ideal would be for the internal library maintainers to fix the issues or support the required features, but with ever-longer lead times, it's often necessary to work something out at the project level. That working out is much more abrasive with web components, though.

In practice, this update is great for usability. But it's a sign that web components are still relevant, extending my time dealing with poorly built ones.


I see. Particularly with CSS there's more of an enforced contract around how the element internals get exposed for styling. If the element hasn't exposed `parts` or slots it's hard to hack around it.

More broadly speaking I have found myself getting thrashed by the ever evolving "best practices" in the React ecosystem. First: write everything using class components, and then a couple years later everything should be written with hooks.

I think the benefit of web components (for certain things) is that the APIs and the standards move slowly. Something you write today isn't going to feel obsolete in only a couple years.


> select internal elements of the component

If you need an equivalent Expedient Field Hack for a web component, remember it's an HTMLElement subclass and ends up in the DOM. So you can get the object back out of the DOM as 'just another internal element' and gutwrench it from there (or, depending, subclass it and register your subclass, or insert an extra entry into its prototype chain, or ...).

I mean, yes, those are all horrible, but so's digging into the guts of a react component you don't own, sometimes taking a dependency on the internals of somebody else's code is the least horrible option.

... it occurs to me that given a JS Function's .toString() returns the source code, on an e.g. lit-html component you could probably even grab the render function, yank the html`...` call out of it, regexp that to tweak the markup, and then use an indirect eval to stuff it back in.

But really, I think "you now have the element object, monkey patch it to taste" is probably quite sufficient and far, far less ridiculous than that last paragraph, I just couldn't resist sharing the horrifying thought :)
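A minimal sketch of the "monkey patch the element object" idea. Here a plain class stands in for the HTMLElement subclass, since the technique only relies on the component being an ordinary JS object with a prototype (in a real page you'd pull the instance back out with something like `document.querySelector`):

```javascript
// Stand-in for a web component class you don't own.
class FancyDrawer {
  close() { return "default close"; }
}

// 1. Patch a single instance you got back out of the DOM.
const el = new FancyDrawer();
el.close = function () { return "patched instance close"; };

// 2. Or override on the prototype so every instance picks it up.
FancyDrawer.prototype.close = function () { return "patched prototype close"; };

console.log(el.close());                // instance property shadows the prototype
console.log(new FancyDrawer().close()); // other instances get the prototype patch
```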


Yeah, worth noting that there IS an escape hatch for monkey-patching (at your own risk). So you're not totally out of luck. It is just HTMLElement all the way down


It's amazing how the web has come full circle now.

2023 web developers: I figured it out y'all. Send HTML from the server.

1994 web developer: ...excuse me? What else would you do?


On my iPhone, when I click on home links (let's say Webhooks & notifications), I'm redirected to the middle of the target page. I'm sure it's a small bug, but it's kind of sad that basic navigation is half broken. On my pet documentation site [1], I've chosen the boring way:

- fully static HTML (Jekyll)

- minimal JavaScript (especially no tracking)

- search is local

I find my site quite snappy without a complicated setup and can focus on the doc content. It's also perfect for SEO, "for free"...

[1]: https://hurl.dev


Even though I’m not deaf or hard of hearing, I find myself using captions more and more when I consume video content. I don’t think I’m alone in that. I saw something a while ago that tracked this as a broad trend. Today’s accessibility is tomorrow’s usability.

I also work on player stuff with OP and learned a lot in this process.


As someone who IS deaf, but has a cochlear implant, I have been telling companies for years that it's not only people like me who get use from it; it's also people who can read English but find listening harder. It allows them to reach a wider audience, especially if they have the captions translated. For the last couple of years I've told people that a lot of phone users, for example, will be browsing their phone in public, and if your video requires sound, people will continue scrolling instead of taking the time to watch.

I did manage to get Air NZ to caption videos too, but not sure if they still do. Their marketing person changed years back and they didn't bother continuing with it (for a time anyway - with uBlock and such, I don't see any marketing from them anymore), even though the metrics that came back from the captioning experiment were incredible.

Then I got on an international flight and the safety video had captions for every language... except English. Ha.

If only a senior dev could get a job in which a11y is a priority. It's something I'd like to specialise in more.

I've found with NZ companies nobody seems to care (because there's bugger all legislation in NZ for this).


You might consider applying to Apple. Accessibility is definitely a priority.


I can't stand captions being on. I tolerate them for foreign language stuff, obviously, but otherwise they really reduce my enjoyment of the material.

The funny thing is I'm someone who finds it difficult to hear people in noisy environments. But I never seem to have a problem with films or TV. Even films that many people seem to have trouble with, like Tenet.

My girlfriend always wants them on, though. I don't get it. But I have at least a couple of theories about why this is.

Firstly, she's a way faster reader than me. She reads easily twice as fast as me. But reading always takes priority in my brain. I can't not read a caption if it's on screen and that increased mental burden reduces my enjoyment.

Secondly, we watch things in different ways. When I watch something, it has 100% of my attention. Anything else is background noise, which I don't like. I'm much more selective about what I watch for that reason, as it's a bigger commitment. That means my listening circuits are fully engaged, but also I'm fully immersed in the film which means I benefit from additional context which is super important for listening.


Related story: "Why do all these 20-somethings have closed captions turned on?" – https://news.ycombinator.com/item?id=32879737


I remember that story. I’m definitely in that group although older. For reasons I don’t fully understand there is no meaningful difference for me in watching foreign media or media in my native English, and in both cases I require captions to understand dialogue. I am not hard of hearing in any way and don’t have problems in conversation, but I just can’t understand what anyone is saying on movies and TV. I rely nearly entirely on captions.


Do you have issues understanding people in noisy (a lot of background noise, not necessarily loud) environments (e.g. at private parties where there are other groups talking in the same room)? I have this issue (apparently dubbed "hearing in noise"), and it gets triggered by dialogue in TV shows as well.


> Today’s accessibility is tomorrow’s usability.

https://en.wikipedia.org/wiki/Curb_cut_effect


Wow that’s exactly it. I didn’t know there was a term for this. Thanks for sharing!


I recently watched Rings of Power, and I found the dialogue nearly impossible to understand without subtitles - everyone was so quiet all the time, almost whispering to each other, and almost never speaking towards the camera, so I wasn't getting the aid of seeing the sounds they were making either. The Numenoreans were ok usually but no one else was intelligible without subtitles. It happens in movies a lot too, mixing is often done to prioritize effects & music, rather than people actually understanding wtf anyone is saying.

All this to say, I turned on subtitles within one episode and it made for a much better viewing experience. Once I was able to read along I had no issue at all understanding the words too, but without the cue it was impossible.


This effect is especially bad for series and movies originally in English. I understand English pretty well, but it's incredibly difficult for me to watch movies on a TV. Either I have to wear headphones, or set the TV volume uncomfortably loud.


I find the audio mix on a lot of movies is such that turning it up makes almost no difference. It barely improves the dialog loudness and just increases the obnoxiously loud music and sound effects.

Older media seems to have a more even mix. It is probably a 5.1-through-my-stereo-speakers thing, but I'm not sure how to fix it or if it even can be fixed.


Japanese broadcasters put captions on everything due to the large number of homophones in spoken Japanese; words sound the same but are written with different kanji, so having the text form avoids confusion.


Wait, then how do they handle... talking? It seems hard to believe that the spoken language is generally confusing without captions...


They handle it just fine, of course. I disagree with the statement that Japanese has a relatively high incidence of homophones. It is true that Japanese TV shows, particularly variety shows and lighter fare like that, have a lot of on-screen captions, and I don't know the reason for that, but I doubt it's homophones; at any rate, more "serious" shows like dramas as well as all movies are caption-free (but will likely have a subtitle track you can enable).


I lived in Japan in the 90s and watched a LOT of TV there. From my experience most subtitling on TV fell into 2 categories:

On anime and children's shows, the theme songs were frequently subtitled. I assumed this was so viewers could appreciate the lyrics.

Comedy, skit and variety shows, where the dialog and commentary is mostly banter amongst a cast of "wacky" hosts. Here, subtitles (almost always in a garish, colorful font) served to punctuate jokes or funny lines, in the same way that a laugh track on an American sitcom is used to let the audience know when to laugh (even though the Japanese shows usually had a laughing studio audience as well).

Some shows did (do?) go overboard with the "comedic" subtitles, to a point where they were subtitling almost every other line a host said.


Going by my poor Japanese listening skills and some interviews from documentaries I've seen, Japanese speakers mostly stick to common words and idioms in conversations. They also "over-explain" by repeating, rephrasing, or even reacting to their own points.

So, it's just like conversations in any language: basic, rambling, and emotive.


Talk in documentaries tends to be like that. It's not representative of general conversation.


If I'm watching something I want to retain, I intentionally reduce the volume and turn on closed captioning.

I do wish closed captioning worked better with speed controls, though.


What do you mean by working better with speed controls? The playback speed multiplier affects both the video and captions for me on Youtube. I like opening up the "transcript" too from the hamburger menu on YT.

For local playback of media files, I use Daum Kakao PotPlayer; I have never come across something more power-user friendly. MPV is weaker in a lot of areas, in my opinion. PotPlayer interfaces with everything I'd want, including VapourSynth and madVR, and it has options for subtitle resync based on a multiplier and also converting based on framerate (60 -> 30). There is even an option for live subtitle translations. I personally like using it with Smooth Video Project, but I understand it's not everyone's taste.
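The multiplier-based subtitle resync mentioned above amounts to rescaling cue timestamps. A minimal sketch (the function name and cue shape are mine, not PotPlayer's):

```javascript
// Rescale a subtitle cue's timestamps (in seconds) by a rate multiplier,
// e.g. for content converted between framerates the multiplier would be
// sourceFps / targetFps.
function resyncCue(cue, multiplier) {
  return {
    start: cue.start * multiplier,
    end: cue.end * multiplier,
    text: cue.text,
  };
}

const cue = { start: 10, end: 12.5, text: "Hello" };
console.log(resyncCue(cue, 2)); // { start: 20, end: 25, text: "Hello" }
```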


I enabled captions to watch the AppleTV series Slow Horses. I’m a native English speaker without hearing problems, but something about the slurred fast dialog and oddball slang made it hard to follow.

I really enjoyed having the captions for so many other reasons that I never turned them off. It’s great for multi-tasking and avoiding interruptions from environmental sounds like barking dogs, loud annoying children, phone calls, et cetera.


I think the trend comes from services, like YouTube, enabling captions by default. I almost always disable them, because they’re distracting. However, sometimes I can’t be bothered. I suspect many people can’t be bothered even more often.

I enable captions sometimes for single words, which I failed to hear right, although usually in those instances captions show something completely nonsensical.


Hey there :). Author here. It was a fun experiment to play around with some different strategies for adding content moderation to https://stream.new

Hive ended up being the one I landed on after trying Google Vision first (https://cloud.google.com/vision).

The other one I was looking at is Clarity.ai but I didn't get a chance to try that one yet.
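For a sense of what such a moderation pass produces: Vision's SafeSearch annotation returns a likelihood label per category, and mapping those to a block/allow decision is up to you. A sketch of that mapping (the threshold choice and function name are mine; only the likelihood strings come from the Vision API):

```javascript
// Likelihood labels as returned by Vision's SafeSearch annotation,
// ordered from least to most likely.
const LIKELIHOODS = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"];

// Flag the upload if any category meets or exceeds the threshold.
function shouldFlag(safeSearch, threshold = "LIKELY") {
  const min = LIKELIHOODS.indexOf(threshold);
  return Object.values(safeSearch).some(
    (label) => LIKELIHOODS.indexOf(label) >= min
  );
}

console.log(shouldFlag({ adult: "VERY_UNLIKELY", violence: "POSSIBLE" })); // false
console.log(shouldFlag({ adult: "VERY_LIKELY", violence: "UNLIKELY" }));   // true
```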


stream.new seems really cool. However there is no account button to see all of your video URLs, or a download option for the video. If there was, I would probably make it my default (not sure if that is what you want)


Thanks! Yeah that would be a significant improvement.

This started as a little demo project with Nextjs + Mux and then evolved into more of an actual product (https://github.com/muxinc/stream.new).

Right now the lightweight utility aspect of stream.new feel right, but if we continue to build upon it as a standalone free product then adding the concept of an "account" with saved videos makes a ton of sense.


One thing that wasn't obvious to me: why did you care about NSFW uploads? As I understood it, you want to become the Imgur of video. Imgur only became so big because they allowed NSFW stuff.


Not involved with this project, but there's a couple big reasons most would care about this.

* Child porn and similar content that is a level beyond simply "NSFW"

* Uploaders of NSFW stuff are always in need of a new platform they haven't been kicked off yet, and newer platforms are likely to be dominated by this type of content. Unless you want your platform to gain a reputation as the place for mostly NSFW content, you probably don't want this.


In addition to what others have said:

* Porn is almost always posted in violation of copyright.

* Hosting porn opens you up to legal issues if you can't verify that everyone involved is adult and consenting.

* Payment processors, hosting companies and other service providers you rely on usually have strict policies excluding porn.

And that's just addressing legal pornography, not other "NSFW" content like child abuse, animal abuse, general violence or gore. If you run a large enough public user generated content service people will use it to distribute illegal content or flood it with jihadist execution videos to ruin someone's day.


NSFW affects not only the site, but its users.

If a site is specifically intended as, or is incidentally used in the course of, professional work, then NSFW content can literally get your users fired if it's seen, logged, or otherwise detected.

That turns out to shift your user demographics in the medium-to-longer term, and not in a direction that's generally compatible with high-quality and engaging content and interactions.

Where NSFW content is permitted, it should be specifically tagged as such, and not presented unless users specifically opt in to it.


Great write up! Curious if you/Mux have ever considered offering content moderation alongside content hosting? Seems like most platforms do one or the other, but I imagine you could charge quite a premium if you offered both in tandem.


Is there a way to delete videos again?


Everyone is part-time moonlighting, but how many others are involved?

If only 2 or 3 of you then I think the range you're asking for makes sense.


This is one for video nerds. From Demuxed in October 2019: "Three Roads to Jerusalem" about low latency live streaming. Incredibly well done by Will Law. Informative and entertaining.

https://www.twitch.tv/videos/501523712

