Items tagged with: Website
Very useful for finding out the #HTML #colorcode of a colour on a page
Since inserting #Smilies in #Friendica hasn't been solved optimally yet, I use this addon to insert smilies into a text field with two clicks.
In combination with my Nextcloud instance, this is a wonderful way to synchronise my #bookmarks across several #computers and keep the #data in my own hands.
Since #Vivaldi, unlike #Google #Chrome, has no way to translate a #website via right click, I helped myself out with this useful addon. By highlighting the text, you can display the #translation as a #popup on the same page with a single click, or use a right click to open the Google Translate page in a new tab with the previously highlighted text already inserted.
A very important #extension for me that, in combination with my #Nextcloud instance, lets me keep my #passwords in my own hands.
Screencastify – Screen Video Recorder
With #Screencastify you can record any #browser window and even your own #desktop. The special thing about this #addon, though, is that you can edit the videos afterwards. You can trim the #videos and even cut out just excerpts of the whole recording.
And of course uBlock Origin. This #adblocker is a must for every #browser; you can even decide for yourself which parts of a website get blocked. uBlock was taken over by a company in 2018, so make sure to get #uBlock #Origin, the one from Raymond Hill (gorhill), the original developer.
Can I predict the winners of the 2019 F1 season by looking at the performance of their websites? No. But I'm gonna anyway.
Article word count: 7610
HN Discussion: https://news.ycombinator.com/item?id=19558030
Posted by anacleto (karma: 3643)
Post stats: Points: 135 - Comments: 43 - 2019-04-02T20:03:21Z
#HackerNews #fastest #has #the #website #who
Posted 19 March 2019
I was trying to make my predictions for the new Formula One season by studying the aerodynamics of the cars, their cornering speeds, their ability to run with different amounts of fuel. Then it hit me: I have no idea what Iʼm doing.
So, Iʼm going to make my predictions the only way I know how: By comparing the performance of their websites. Thatʼll work right?
If anything, itʼll be interesting to compare 10 sites that have been recently updated, perhaps even rebuilt, and see what the common issues are. Iʼll also cover the tools and techniques I use to test web performance.
Iʼm going to put each site through WebPageTest to gather the data on Chrome Canary on a Moto G4 with a 3g connection.
Although a lot of us have 4g-enabled data plans, we frequently drop to a slower connection. A consistent 3g connection will also suggest what the experience would be like in poor connectivity.
Trying to use a site while on poor connectivity is massively frustrating, so anything sites can do to make it less of a problem is a huge win.
In terms of the device, if you look outside the tech bubble, a lot of users canʼt or donʼt want to pay for a high-end phone. To get a feel for how a site performs for real users, you have to look at mid-to-lower-end Android devices, which is why I picked the Moto G4.
Iʼm using Chrome Canary because WPT seems to be glitchy with regular Chrome right now. Yeah, not a great reason, but the test wasnʼt useful otherwise.
Oh, and whenever I talk about resource sizes, Iʼm talking about downloaded bytes, which means gzip/brotli/whatever.
Calculating race time
The amount of bandwidth your site uses matters, especially for users paying for data by the megabyte. However Iʼm not going to directly score sites based on this. Instead, Iʼm going to rate them on how long it takes for them to become interactive. By "interactive", I mean meaningful content is displayed in a stable way, and the main thread is free enough to react to a tap/click.
Caching is important, so Iʼm going to add the first load score to the second load score.
Thereʼs some subjectivity there, so Iʼll try and justify things as I go along.
Issues with the test
Iʼm not comparing how ʼgoodʼ the website is in terms of design, features etc etc. In fact, about:blank would win this contest. I love about:blank. It taught me everything I know.
Iʼm only testing Chrome. Sorry. Thereʼs only one of me and I get tired. In fact, with 10 sites to get through, itʼs possible Iʼll miss something obvious, but Iʼll post the raw data so feel free to take a look.
I only used "methodology" as a heading to try and sound smart. But really, this is fun, not science.
Also, and perhaps most importantly, the results arenʼt a reflection of the abilities of the developers. We donʼt know how many were on each project, we donʼt know what their deadline was or any other constraints. My goal here is to show how I audit sites, and show the kind of gains that are available.
Bonus round: images
For each site Iʼll take a look at their images, and see if any savings can be made there. My aim is to make the image good-enough on mobile, so Iʼll resize to a maximum of 1000px wide, for display at no larger than 500px on the deviceʼs viewport (so it looks good on high density mobile devices).
Mostly, this is an excuse to aggressively push Squoosh.
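To make that responsive-image idea concrete, here's a minimal sketch of the markup (filenames hypothetical): the browser picks the smallest candidate that still looks sharp for the current viewport and screen density.

    <img src="/img/car-500.jpg"
         srcset="/img/car-500.jpg 500w, /img/car-1000.jpg 1000w"
         sizes="(max-width: 500px) 100vw, 500px"
         alt="The 2019 car">

At a 500px display size, a 2x-density phone will pick the 1000w candidate, which matches the 1000px ceiling described above.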
Ok. Itʼs lights out, and away we go…
Mercedes make cars. You might have seen one on a road. Theyʼre a big company, so their F1 team is a sub brand. Theyʼre also one of the few teams with the same drivers as 2018, so this might not be a ʼnewʼ site. Letʼs take a look:
Website. WebPageTest results.
Ok, this is going to be a longer section than usual as I go through the tools and techniques in more detail. Stick with me. The following sections will be shorter because, well, the technique is mostly the same.
I like WebPageTest as it runs on real devices, and provides screenshots and a waterfall. From the results above, the median first run is ʼRun 3ʼ, so letʼs take a look at the ʼFilmstrip viewʼ:
WebPageTestʼs waterfall diagram allows us to match this up with network and main thread activity:
You donʼt need to use WebPageTest to get this kind of overview. Chrome DevToolsʼ "Performance" panel can give you the same data.
Notice how the waterfall has ʼstepsʼ. As in, entry 23 is on another step from the things before, and entry 50 is on yet another step.
This suggests that something before those things prevented them from downloading earlier. Usually this means the browser didnʼt know in advance that it needed that resource.
The green vertical line shows ʼfirst renderʼ, but we know from the filmstrip that itʼs just a spinner. Letʼs dig in.
The green line appears after the first ʼstepʼ of items in the waterfall. That suggests one or more of those resources was blocking the render. Viewing the source of the page shows many <script> tags in the <head>.
These block rendering by default. Adding the defer attribute prevents them blocking rendering, but still lets them download early. Ire Aderinokun wrote an excellent article on this if you want to know more.
Adding defer to those scripts will allow the page to render before those scripts have executed, but there might be some additional work to ensure it doesnʼt result in a broken render. Sites that have been render-blocked by JS often come to rely on it.
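For illustration, the change is just an attribute on the script tags in the <head> (script name hypothetical):

    <!-- Render-blocking: parsing halts while this downloads and executes -->
    <script src="/js/app.js"></script>

    <!-- Deferred: downloads early, executes after the document is parsed -->
    <script src="/js/app.js" defer></script>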
Ideally pages should render top-to-bottom. Components can be enhanced later, but it shouldnʼt change the dimensions of the component. I love this enactment of reading the Hull Daily Mail website, especially how they capture the frustration of layout changes as well as popups. Donʼt be like that.
For instance, interactive elements like the carousel could display the first item without JS, and JS could add the rest in, plus the interactivity.
Looking at the page source, the HTML contains a lot of content, so theyʼre already doing some kind of server render, but the scripts prevent them from taking advantage of it.
Over on WebPageTest you can click on items in the waterfall for more info. Although at this point I find Chrome Devtoolsʼ "Network" panel easier to use, especially for looking at the content of the resources.
The render-blocking scripts weigh in around 150k, which includes two versions of jQuery. Thereʼs over 100k of CSS too. CSS also blocks rendering by default, but you want it to block for initial styles, else the user will see a flicker of the unstyled page before the CSS loads.
Splitting CSS isnʼt as easy. Keeping CSS tightly coupled with their components makes it easier to identify CSS that isnʼt needed for a particular page. I really like CSS modules as a way to enforce this.
There are tools that automate extracting ʼabove the foldʼ CSS. Iʼve had mixed results with these, but they might work for you.
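As a sketch of the general pattern (not what this site actually does; filenames hypothetical), the critical styles can be inlined and the rest loaded without blocking render:

    <style>
      /* Inlined 'above the fold' styles */
      .masthead { background: #000; color: #fff; }
    </style>
    <!-- Fetch the remaining CSS without blocking first render -->
    <link rel="preload" href="/css/rest.css" as="style"
          onload="this.onload=null; this.rel='stylesheet'">
    <noscript><link rel="stylesheet" href="/css/rest.css"></noscript>

The preload/onload trick is the well-known loadCSS pattern; it trades a flash of partially-styled below-the-fold content for a much earlier first render.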
The second ʼstepʼ of the waterfall contains some render-altering fonts. The browser doesnʼt know it needs fonts until it finds some text that needs them. That means the CSS downloads, the page is laid out, then the browser realises it needs some fonts. Preloading the fonts with <link rel="preload"> in the <head> would be a quick-win here. This means the browser will download the fonts within the first step of the waterfall. For more info on preloading, check out Yoav Weissʼ article.
Also, the fonts weigh in around 350k, which is pretty heavy. 280k of this is in TTF. TTF is uncompressed, so it should at least be gzipped, or even better use woff2, which would knock around 180k off the size. font-display: optional could be considered here, but given itʼs part of corporate identity, sighhhhhhhh it might not get past the brand folks.
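Putting those font fixes together, a sketch might look like this (paths and family name hypothetical):

    <!-- Let the browser fetch the font during the first waterfall step -->
    <link rel="preload" href="/fonts/brand.woff2" as="font"
          type="font/woff2" crossorigin>
    <style>
      @font-face {
        font-family: 'Brand';
        src: url('/fonts/brand.woff2') format('woff2');
        /* Use the fallback font if the download loses the race */
        font-display: optional;
      }
    </style>

Note the crossorigin attribute: font preloads require it even for same-origin fonts, otherwise the preloaded response goes unused and the font downloads twice.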
Thereʼs another render-blocking script in this part of the waterfall. The download starts late because itʼs at the bottom of the HTML, so it should be moved to the <head> and given the defer attribute.
Then, the Moto G4 gets locked up for 3 seconds. You can see this from the red bar at the bottom of the waterfall. WebPageTest shows little pink lines next to scripts when theyʼre using main-thread time. If you scroll up to row 19, you can see itʼs responsible for a lot of this jank.
Letʼs take a look at the second load:
Aside from the HTML, none of the resources have Cache-Control headers. But browser heuristics step in, and as a result the cache is used for most assets anyway. This is kinda cheating, and unlikely to reflect reality, but hey, I canʼt change the rules now. For a refresher on caching, check out my article on best practices.
The lack of caching headers seems like an oversight, because a lot of the assets seem to have version numbers. Most of the work has been done, it just lacks a little server configuration.
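Since the asset URLs already look versioned, the missing server configuration is roughly one response header on those assets (a sketch; the exact lifetime is a judgment call):

    Cache-Control: max-age=31536000, immutable

With that in place, repeat visits can serve those assets straight from cache instead of relying on browser heuristics.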
Using Squoosh I could get the first image down from 50k to 29k without significant loss. See the results and judge for yourself.
So, Mercedes storm into 1st place and last place at the same time, as often happens with the first result.
* good: HTTPS
* good: HTTP/2
* good: Gzip, except TTF
* good: Minification
* good: Image compression
* bad: Render-blocking scripts
* bad: Late-loading fonts
* bad: No cache control
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Badly compressed fonts
My gut feeling is the quick-wins (preloading, compressing fonts) would halve this time. I imagine the refactoring of the JS and CSS would be a much bigger task, but would bring big results.
Ferrari also make cars. You might have seen one on a road, driven by a wanker (except my friend Matt who bucks this trend. Hi Matt!).
They have a new driver this year, and a slightly different livery, so this site may have been changed fairly recently.
Website. WebPageTest results.
Letʼs dive in:
Wow thatʼs a lot of requests! But, Ferrari use HTTP/2, so it isnʼt such a big deal. I should mention, this article is a real workout for your scrolling finger.
The stand-out issue is that huge row 16. Itʼs a render-blocking script.
Itʼs also on another server, so it needs to set up a new HTTP connection, which takes time. You can see this in the waterfall by the thinner green/orange/purple line which signifies the various stages of setting up a connection.
However, the biggest issue with that script is that itʼs 1.8mb. Thereʼs also an additional 150k script that isnʼt minified, and other scripts which sit on different servers.
The CSS is 66k and 90% unused, so this could benefit from splitting. There are a few fonts that would benefit from preloading, but theyʼre pretty small. Letʼs face it, everything is small compared to the JS. Oddly, Chromeʼs coverage tool claims 90% of the JS is used on page load, which beggars belief.
No, not the whole thing. The logo. No, not the whole logo, thatʼs SVG. But the horse, the horse is a base64 PNG within the SVG:
Look at it. Itʼs beautiful. Itʼs 2300x2300. Itʼs 1.7mb. 90% of their performance problem is a massive bloody horse. That logo appears across the main Ferrari site too, so itʼs probably something the creator of the F1 site had little control over. I wonder if they knew.
Again, there seems to be server rendering going on, but itʼs rendered useless by the script.
There also seem to be multiple versions of the same image downloading.
The site has ok caching headers, so Iʼm surprised to see the browser revalidating some of those requests.
First things first, letʼs tackle that logo. SVG might be the best format for this, but with the bitmap-horse replaced with a much simpler vector-horse (Vector Horseʼs debut album is OUT NOW). However, I donʼt have a vector version of the horse, so Iʼll stick with a bitmap.
The logo is only ever displayed really-tiny in the top-left, at a height of 20px. Using Squoosh, Iʼm going to make a 60px version so it stays sharp on high-density devices. That takes the image from 1.7mb, to a 2.6k PNG, or a 1.7k WebP. See the results.
One of the initial images on the site is of the 2019 Ferrari. However, itʼs 1620px wide, and not really optimised for mobile. I took the image down from 134k to 23k as JPEG, or 16k as WebP without significant loss. But, you can be the judge. WebP shines here because JPEG struggles with smooth gradients – at lower sizes it creates banding which is really noticeable.
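Serving WebP with a JPEG fallback is straightforward with a picture element (filenames hypothetical); browsers without WebP support fall through to the img:

    <picture>
      <source srcset="/img/car-2019.webp" type="image/webp">
      <img src="/img/car-2019.jpg" alt="The 2019 Ferrari">
    </picture>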
The page also contains little track maps that arenʼt initially displayed. These should be lazy-loaded, but they could also do with a bit of compression. I was able to get one of them down from 145k to 27k as a PNG without noticeable loss. Again, you be the judge.
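As a sketch of the lazy-loading idea (class and path hypothetical), an IntersectionObserver can hold back the download until a map approaches the viewport:

    <img data-src="/img/track-map.png" alt="Track map" class="lazy">
    <script>
      // Swap in the real src only when the image nears the viewport
      const io = new IntersectionObserver((entries) => {
        for (const entry of entries) {
          if (entry.isIntersecting) {
            entry.target.src = entry.target.dataset.src;
            io.unobserve(entry.target);
          }
        }
      });
      document.querySelectorAll('img.lazy').forEach((img) => io.observe(img));
    </script>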
* Mercedes 19.5s
* Ferrari 46.1s
Last place for now.
Most of their problem is down to one horse. I donʼt think anyone did that deliberately, but RUM & build time metrics would have made it obvious.
* good: HTTPS
* good: HTTP/2
* good: Gzip
* good: Caching
* bad: Render-blocking scripts
* bad: Unnecessary JS
* bad: Unnecessary CSS
* bad: Unminified scripts
* bad: Unstable render
* bad: Poor image compression
* bad: Late-loading fonts
* bad: Main thread lock-up
* bad: Horse
Red Bull arenʼt a car company. They sell drinks of a flavour I can only describe as "stinky medicine". But, theyʼre also a much more modern, tech-savvy company, so itʼll be interesting to see if it shows here.
Website. WebPageTest results.
The user gets 4.9s of nothing, and then the first render is a broken UI. Things sort themselves out at around 6.5s. Thereʼs a font switch at 9.5s, and a horrendous cookie warning at 16s. But, Iʼd call this visually ready at 6.5s.
Unfortunately we canʼt call this page ready at 6.5s, as the main thread is locked up until 11s. Still, this takes it into 1st place by a couple of seconds.
The story here is very similar to the previous sites. The page contains what looks like a server render, including minified HTML, but render-blocking scripts prevent it being shown. The scripts should use defer.
The CSS is 90% unused, and the JS is ~75% unused, so code-splitting and including only whatʼs needed for this page would have a huge benefit. This might help with the main thread lock-ups too.
Again, the fonts start loading way too late. Preloading them with <link rel="preload"> would be a quick win here.
The icon font times out, which causes a broken render. There isnʼt really a good reason to use icon fonts these days. Icon fonts should be replaced with SVG.
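For illustration, an inline SVG icon is styleable via CSS and canʼt time out independently of the page (this generic menu icon is just an example, not one of the siteʼs icons):

    <svg width="24" height="24" viewBox="0 0 24 24" aria-hidden="true">
      <!-- Inherits the surrounding text colour via currentColor -->
      <path d="M3 6h18M3 12h18M3 18h18" stroke="currentColor" stroke-width="2"/>
    </svg>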
The site does use <link rel="preload">, but itʼs mostly used for JS and CSS, which donʼt really need preloading as theyʼre already in the <head>. Worse, they preload a different version of the scripts to the ones they use on the page, so theyʼre doubling the download. Chromeʼs console shows a warning about this.
Thanks to decent caching, we get a render really quickly. However, JS bogs down the main thread for many seconds afterwards, so the page isnʼt really interactive until the 4.8s mark.
The images could be smaller though. Taking the first image on their page, using Squoosh I can take it down from 91k to 36k as a JPEG, or 22k as WebP. See the results.
They also use a spritesheet, which might not be necessary thanks to HTTP/2. By bringing the palette down to 256 colours, I can take it down from 54k to 23k as PNG, or 18k as WebP. See the results.
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
Despite issues, Red Bull are straight into 1st place by almost 4 seconds. Nice work!
* good: HTTPS
* good: HTTP/2
* good: Gzip
* good: Minification, including HTML
* good: Image compression
* good: Caching
* bad: Render-blocking scripts
* bad: Main thread lock-up
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Late-loading fonts
* bad: Unnecessary icon fonts
Back to folks that make cars. Car adverts tend to be awful, but I reckon Renault make the worst ones. Also, they may have spent too much money on one of their drivers and not have enough left over for their website. Letʼs see…
Website. WebPageTest results.
The user gets 5.8s of nothing, then a broken render until 7.8s, but the intended content of the page doesnʼt arrive until 26.5s. Also, asking for notification permission on load should be a black flag situation, but Iʼll be kind and ignore it.
As with previous sites, render-blocking scripts account for the first 5.8s of nothing. These should use defer. However, this script doesnʼt get the page into an interactive state, it delivers a broken render.
Then, something interesting happens. The scripts that are needed to set up the page are at the bottom of the HTML, so the browser gives these important scripts a low priority.
You can see from the waterfall that the browser starts the download kinda late, but not that late. The darker area of the bars indicates the resource is actively downloading, but in this case the scripts are left waiting until other things such as images download. To fix this, these important scripts should be in the <head> and use the defer attribute. The page should be fixed so the before-JS render is usable.
The CSS is 85% unused, and the JS is ~55% unused, so it would benefit from splitting.
As with the other pages the fonts load late. Itʼs especially bad here as images steal all the bandwidth (more on that in a second). Preloading fonts is a huge & quick win, and icon fonts should be replaced with SVG.
The caching is pretty good here, but a few uncached scripts push the complete render back to 5.9s. The first lap fixes would help here, along with some Cache-Control headers.
Images play quite a big part in the performance here.
I took their first carousel image and put it through Squoosh. This took the size from 314k down to 59k as a JPEG, or 31k as WebP. I donʼt think the compression is too noticeable, especially as text will be rendered over the top. Judge for yourself.
Their driver pictures are PNGs, which is a bad choice for photo data. I can get one of them from 1mb down to 21k as a JPEG. See the results.
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
* Renault 32.4s
The download priority of those important scripts hits hard here, and the images steal all the bandwidth. These things can be easily fixed, but are slowing the site down by 15+ seconds.
* good: HTTPS
* good: HTTP/2
* good: Gzip
* good: Minification
* good: Caching
* bad: Image compression
* bad: Render-blocking scripts
* bad: Unstable render
* bad: Main thread lock-up
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Late-loading fonts
* bad: Unnecessary icon fonts
Haas are the newest team in F1 (unless you count rebrands), so their website should be pretty new. Theyʼve kept their drivers from last year, despite one of them managing to crash at slow speeds for no reason, and blame another driver who wasnʼt there.
Their car looks pretty different this year with the arrival of a new sponsor, Rich Energy. Yʼknow, Rich Energy. The drink. You must have heard of it. Rich Energy. Itʼs definitely not some sort of scam.
Website. WebPageTest results.
The user gets 4.5s of nothing, and then itʼs interactive! Pretty good!
It takes a long time for that first image to show, but hey, it doesnʼt block interactivity.
In terms of improvements, itʼs a similar story. Thereʼs a server render, but itʼs blocked by render-blocking scripts in the <head>.
These come from a couple of different servers, so they pay the price of additional HTTP connections. But, the amount of script blocking render is much smaller than other sites weʼve looked at so far.
Only a fraction of the JS and CSS is used, so splitting those up would really improve things. The main CSS isnʼt minified which hurts load time slightly.
Again, font preloading would help here.
Their second load is slower than their first. On first load, the overall time is network-limited, so things like image decoding and script execution happen gradually; on the cached second load, it all lands at once.
Things arenʼt great here. The first couple of carousel images are 3mb each, and look like they came straight off a digital camera. I guess they were uploaded via a CMS which doesnʼt recompress the images for the web.
Using Squoosh (have I mentioned Squoosh yet?), I can take those thumbnails from 3mb to around 56k as a JPEG, and 44k as WebP. See the results.
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
* Renault 32.4s
* Haas 12.5s
Despite problems, they jump into 1st place! However, it feels like it could be a lot faster by solving some of the main-thread issues.
* good: HTTPS
* good: HTTP/2
* good: Gzip
* good: Minification (mostly)
* good: Caching
* bad: Image compression
* bad: Render-blocking scripts
* bad: Main thread lock-up
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Late-loading fonts
McLaren do sell the occasional car, but theyʼre a racing team through and through. Although, in recent years, theyʼve been a pretty slow racing team, and as a result theyʼve lost their star driver. But will the website reflect the problems theyʼre having on track?
Website. WebPageTest results.
The first problem we see in the waterfall is the amount of additional connections needed. Their content is spread across many servers.
Their main CSS is 81k, but 90% unused for the initial render. Ideally the stuff needed for first render would be inlined, and the rest lazy-loaded.
Then thereʼs a request for some fonts CSS (row 6). Itʼs on yet another server, so we get another connection, and it serves a redirect to yet another server (row 10).
The CSS it eventually serves is 137k and uncompressed, and is mostly base64 encoded font data.
This is the biggest thing blocking first render. Itʼs bad enough when text rendering is blocked on font loading, but in this case the whole page is blocked. Ideally, required fonts should be served as their own cacheable resources, and preloaded.
Then we get their JS bundle, which is 67% unused, so could benefit from some code splitting. Itʼs also located at the bottom of the HTML, so itʼd benefit from being in the <head> with defer. This would make it start downloading much sooner.
Then we get a request for ʼjsrenderʼ, which seems to be the framework this site uses. This sits on yet another server, so they pay the price of yet another connection.
The content that appears at 24s is included as JSON within the HTML, and it isnʼt clear why it takes so long to render. Their main script uses a lot of main thread time, so maybe itʼs just taking ages to process all the template data, and isnʼt giving priority to the stuff that needs to appear at the top of the page.
Caching headers are mostly absent, but browser heuristics make it look better than it is. The number of connections needed still hits hard, as does the JS processing time.
The content doesnʼt appear until 16.4s, and itʼs worth noting that the main thread is still really busy at this point.
They have a 286k spritesheet which needs breaking up, and perhaps replaced with SVG. Usually I would try to reduce the palette on something like this, but it loses too much detail. Anyway, I can get it down from 286k to 238k as a PNG, and 145k as WebP without any loss at all. See the results.
They avoid downloading large images until the user scrolls, which is nice. Taking one of the images that arrives, I can get it down from 373k to 59k as a JPEG and 48k as WebP. See the results.
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
* Renault 32.4s
* Haas 12.5s
* McLaren 40.7s
* good: HTTPS
* good: HTTP/2
* good: Gzip
* good: Minification
* good: Lazy-loading images
* bad: Render-blocking scripts
* bad: Unstable render
* bad: No cache control
* bad: Image compression
* bad: Main thread lock-up
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Late-loading fonts
* bad: Too many HTTP connections
Lance Stroll wanted to be a racing driver, so daddy bought him an entire F1 team. That team is called Racing Point because I dunno maybe they were trying to pick the dullest name possible.
Website. WebPageTest results.
The user gets nothing for 45s, but the intended top-of-page content isnʼt ready until the 70s mark.
Also, the main thread is too busy for interaction until 76s.
The page starts by downloading 300k of CSS thatʼs served without minification, and without gzip. Minifying and gzipping would be a quick win here, but with over 90% of the CSS unused, itʼd be better to split it up and serve just what this page needs.
Also, this CSS imports more CSS, from Google Fonts, so it pays the cost of another connection. But the main problem is the browser doesnʼt know it needs the fonts CSS until the main CSS downloads, and by that time thereʼs a lot of stuff fighting for bandwidth.
Preloading it with <link rel="preload"> would be a huge and quick win here, to get that CSS downloading much earlier. Itʼs the CSS thatʼs currently blocking first render.
The site also suffers from late-loading scripts.
These scripts are at the bottom of the <body>, so they donʼt block rendering. However, if this JS is going to pop-in content at the top of the page, that space should be reserved so it doesnʼt move content around. Ideally a server-render of the carouselʼs first frame should be provided, which the JS can enhance. Also, these scripts should be in the <head> with defer so they start downloading earlier. Like the CSS, they lack minification and gzipping, which are quick wins. The JS is also 64% unused, so could be split up.
The carousel JS waits until the images (or at least the first image) have downloaded before displaying the carousel. Unfortunately the first image is 3mb. Thereʼs also a 6mb image on the page, but that isnʼt blocking that initial content render. These images need to be compressed (more on that in a second). The carousel should also avoid waiting on the images, and maybe provide a low resolution version while the full image downloads.
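One way to reserve that carousel space is the padding-bottom trick (a sketch assuming a 16:9 carousel; class names hypothetical), so content below doesnʼt jump when images arrive:

    <style>
      /* Hold a 16:9 box open before JS and images land */
      .carousel { position: relative; padding-bottom: 56.25%; }
      .carousel img { position: absolute; top: 0; left: 0; width: 100%; }
    </style>
    <div class="carousel">
      <!-- A small blurry placeholder could sit here while the full image loads -->
      <img src="/img/slide-1-lowres.jpg" alt="First slide">
    </div>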
The main thread is then locked up for a bit. I havenʼt dug into why, but I suspect itʼs image decoding.
For this one, image performance really matters in terms of first render. I got that 3mb image down to 57k as a JPEG, and 39k as WebP. The WebP is noticeably missing detail, but WebPʼs artefacts are less ugly than JPEGʼs so we can afford to go lower, especially since this image sits behind content. You be the judge.
Also, their logo is 117k. Iʼm sure the brand folks would like it to load a bit quicker. By resizing it and reducing colours to 100, I got it to 13k as a PNG, and 12k as WebP. See the results. An SVG might be even smaller.
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
* Renault 32.4s
* Haas 12.5s
* McLaren 40.7s
* Racing Point 84.2s
You might have to scroll a bit to see this one.
The problem here is compression and timing. With image, CSS, JS compression, and some CSS preloading, this score would be < 20s. With a good server render of the carousel, it could be < 10s. Unfortunately itʼs back-of-the-grid for Racing Point.
* good: HTTPS
* good: HTTP/2
* good: Caching
* good: No render-blocking scripts
* bad: No gzip
* bad: No minification
* bad: Image compression
* bad: Unstable layout
* bad: Main thread lock-up
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Late-loading fonts
* bad: Late-loading CSS
* bad: Late-loading JS
Alfa Romeo make cars, but they donʼt really make the one they race. The engine is a Ferrari, and the aero is the work of Sauber, which was the name of this team before 2019. But, a new brand often comes with a new website. Letʼs see how it performs…
Website. WebPageTest results.
The user gets nothing for 7.9s, but they only get a spinner until 15.6s. You have to scroll to get content, but Iʼll count this as interactive.
The render is blocked until the Google Fonts CSS downloads, which points to a similar problem as Racing Point.
And yep, their main CSS (row 2) imports the Google Fonts CSS (row 9). This should be preloaded to allow the two to download in parallel. Itʼs especially bad here, as the font theyʼre downloading is Roboto, which already exists on Android, so no fonts are actually needed.
The main CSS is pretty small, but still 85% unused, so splitting and perhaps inlining would help a lot here.
After parsing the HTML, the browser discovers a load of images it needs to download, then a couple of scripts at the bottom of the page.
However, these scripts are essential to the rendering of the page, so theyʼre loading far too late. A quick win would be to move them to the <head> (they already have defer). This would save around 8 seconds.
The JS is also 85% unused, so splitting would definitely help.
This page shouldnʼt need a spinner. Instead, it could have a before-JS render. The page is mostly static, so there arenʼt too many challenges here.
The things which would make the first lap faster would also help here.
The image right at the top of the page is 236k. Using Squoosh, I managed to get this down to 16k as a JPEG, and 9k as a PNG. See the results.
The page also loads a lot of logos. None of them are particularly big themselves, but they add up. I took one example from 18k down to 7k as a PNG. See the results. This kind of saving across all the logos would be significant.
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
* Renault 32.4s
* Haas 12.5s
* McLaren 40.7s
* Racing Point 84.2s
* Alfa Romeo 20.1s
My gut tells me the quick-wins would knock 10 seconds off this time.
* good: HTTPS
* good: HTTP/2
* good: Caching
* good: Gzip
* good: Minification
* bad: Render-blocking scripts
* bad: Image compression
* bad: Main thread lock-up
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Late-loading fonts
* bad: Late-loading CSS
* bad: Late-loading JS
Itʼs the "stinky medicine" folks again. They fund two ʼindependentʼ teams in F1, with this one being a kinda "B team". But, can they beat their sister team on the web?
Website. WebPageTest results.
Unfortunately the main thread is blocked until 8s. This looks like image decoding, but Iʼm not sure.
Very much like the other sites weʼve seen, they serve HTML but itʼs blocked by render-blocking scripts. They should use defer.
The CSS is 90% unused, and the JS is over 50% unused. Splitting would help here. There are also late-loading scripts that would benefit from being in the <head> with defer, and some unminified scripts thrown in there too. But, despite this, their JS & CSS isnʼt too big.
The biggest performance problem this site has is fonts. Thatʼs why the text comes in late.
The fonts are late-loading, so preloading them would have a huge benefit here.
The fonts are also a little big. Using woff2 would be a quick win here, but it might be worth considering getting rid of some of the fonts altogether. But yeahhhhh that can be a tough argument to have.
We get content at 5s, but the main thread is locked until 7s. Again, I think this is down to images.
The caching headers are mostly great, except for the fonts which only cache for 30 seconds. Thatʼs why we see so many revalidations in the waterfall above. Allowing the fonts to cache for longer would have saved a good few seconds here.
Youʼll never guess what. I decided to use Squoosh here. Their top image isnʼt really optimised for mobile. I managed to get it down from 152k to 26k as a JPEG. See the results.
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
* Renault 32.4s
* Haas 12.5s
* McLaren 40.7s
* Racing Point 84.2s
* Alfa Romeo 20.1s
* Toro Rosso 12.8s
So close! Only a couple of tenths off Haas. With font preloading and some image compression, theyʼd easily be in 1st place.
* good: HTTPS
* good: HTTP/2
* good: Gzip
* good: Minification (mostly)
* good: Caching (mostly)
* bad: Image compression
* bad: Render-blocking scripts
* bad: Main thread lock-up
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Late-loading fonts
Williams are probably my favourite team. Theyʼre relatively small, independent, and have an incredible history. However, their last few seasons have been awful. Hopefully that wonʼt reflect on their website…
Website. WebPageTest results.
The first thing that stands out here is all the additional connections.
I thought this meant they were using a lot of different servers, but a closer look shows theyʼre using old HTTP/1. This means the browser has to set up separate connections for concurrent downloads. A switch to HTTP/2 would be a big win.
But look, those fonts are arriving nice and early. A quick look at their source shows theyʼre using <link rel="preload">, yay!
Then we get their CSS, which is fairly small but still 90% unused. Splitting and inlining these would show a big improvement.
Then we get a few render-blocking scripts in the head. These should be deferred to allow the server render to show before JS loads. The page is pretty much static so this shouldnʼt be too hard.
Their main JS loads at the end of the document, and itʼs a whopping 430k. This page barely needs any JS at all, so Iʼm pretty sure that can be significantly reduced, if not discarded. Thankfully it doesnʼt block render.
The caching headers are good, so very few requests are made on the second run. However, HTTP/1 slows down the initial request, then that massive script arrives from the cache and takes up 1.5s of main thread time. In total, it takes 6.2s to render.
The fixes from the first run will also fix these issues.
I took a look at the main image. Itʼs a wide image, but the sides arenʼt shown on mobile. Also, it has a dark overlay over the top, which reduces the fidelity of the image. I cropped the image a bit, and applied the overlaid colour to the image, so the compressor could optimise for it. Using Squoosh (drink!), I got the size from 267k to 48k as a JPEG, and 34k as WebP. Since text is rendered over the top, it might be reasonable to compress it more. See the results.
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
* Renault 32.4s
* Haas 12.5s
* McLaren 40.7s
* Racing Point 84.2s
* Alfa Romeo 20.1s
* Toro Rosso 12.8s
* Williams 14.1s
Despite issues, this is one of the fastest times. With HTTP/2, it might have jumped into 1st place.
* good: HTTPS
* good: Gzip
* good: Minification
* good: Caching
* good: Preloading fonts
* bad: HTTP/1
* bad: Render-blocking scripts
* bad: Image compression
* bad: Main thread lock-up
* bad: Unnecessary CSS
* bad: Unnecessary JS
Iʼm going to throw the official fantasy F1 site into the mix too. Itʼs my blog I can do whatever I want.
Website. WebPageTest results.
The main thread looks blocked until at least the 15s mark.
This is the first site weʼve seen with a near-empty <body>, and also the first to use one of the modern frameworks (Ember).
Initial render is blocked on CSS and render-blocking scripts.
The page would benefit massively from a server render. But, given there isnʼt really any interactivity on this page, they could consider removing the JS altogether, or preloading it for when it is needed. It certainly doesnʼt need 700k of JS. Netflix removed React from their landing page and saw a huge speed boost. The same could be done here.
The CSS is 95% unused, so it could be split up so that this page only loads the parts it needs.
As with other pages, this page would benefit hugely from font preloading. Also some of the fonts are TTFs. Theyʼre gzipped, but woff2 would be much smaller.
The JS URLs look versioned, but their Cache-Control header requires the browser to check for an update every time. You can see that in the 304 responses in the waterfall.
In terms of images, I took a look at the main background image at the top of the page. The team have clearly tried to make this small, because the JPEG compression is pretty visible. However, it has the same problem as the Williams image – most of it isnʼt visible on mobile, and it has an overlay.
I baked the overlay into the image, cropped it a bit, and took it down from 170k to 54k as a JPEG, and 43k as WebP. It might be possible to go lower, but I was already dealing with an image with a lot of compression artefacts. See the result.
Thereʼs also a 219k PNG. I think itʼs a PNG because it has transparency, but itʼs only ever displayed over a white background. I gave it a solid background, and took it down to 38k as a JPEG, and 20k as WebP. See the result.
Another image caught my eye. Thereʼs an image of the Spanish flag as an SVG (29k), which can be optimised via SVGOMG to 15k. However, the Spanish flag is pretty intricate, and displayed really tiny on the site (30x22). I got it down to 1.2k as a PNG, and 800b as WebP. See the result.
Although SVG is usually the best choice for logos, sometimes a PNG/WebP is better.
* good: HTTPS
* good: HTTP/2
* good: Gzip
* good: Minification
* bad: Render-blocking scripts
* bad: Main thread lock-up
* bad: Image compression
* bad: Unnecessary CSS
* bad: Unnecessary JS
* bad: Poor caching
* bad: Late-loading fonts
* bad: Late-loading images
And thatʼs the last site. How do they all compare?
Ok here we go!
* Mercedes 19.5s
* Ferrari 46.1s
* Red Bull 15.8s
* Renault 32.4s
* Haas 12.5s
* McLaren 40.7s
* Racing Point 84.2s
* Alfa Romeo 20.1s
* Toro Rosso 12.8s
* Williams 14.1s
* Fantasy F1 22.0s
Congratulations to the Haas team who cross the finishing line first, and have the fastest website in Formula One. Toro Rosso are really close, and Williams complete the podium.
This has been kinda eye-opening for me. In my job, I feel I spend a lot of time trying to convince users of frameworks to get a meaningful static/server render as part of their build process early on. It has a huge impact on performance, it gives you the option of dropping the client-side part of the framework for particular pages, and itʼs incredibly hard to add later. However, none of the teams used any of the big modern frameworks. Theyʼre mostly WordPress & Drupal, with a lot of jQuery. It makes me feel like Iʼve been in a bubble in terms of the technologies that make up the bulk of the web.
Itʼs great to see HTTPS, HTTP/2, gzip, minification, and decent caching widely used. These are things folks in the performance community have been pushing for a long time, and it seems like itʼs paid off.
However, one of the bits of performance advice weʼve been pushing the longest is "donʼt use render-blocking scripts", yet every site has them. Most of the sites have some sort of server/static render, but the browser canʼt display it due to these blocking scripts.
Pretty much every site would massively benefit from font preloading. Although Iʼm not surprised itʼs missing from most sites, as preloading hasnʼt been in browsers as long as other performance primitives. I guess code splitting is newish too, which is why there isnʼt a lot of it.
It feels like the quick-wins would cut the times down by 25%, if not more. An unblocked render would bring many down to the 5 second mark.
Some of the issues feel like they may have happened after the site was launched. Eg, a too-big image was uploaded, or a massive horse was added to the header. RUM metrics and build reporting would have helped here – developers would have seen performance crashing as the result of a particular change.
They arenʼt totally comparable, but this page (which is kinda huge and has a lot of images) would score 4.6s on the same test (results). Squoosh would score 4.2s (results), but we spent a lot of time on performance there. Most of this is down to an unblocked initial render and code splitting.
Anyway, does this mean Haas are going to win the 2019 Formula One season? Probably. I mean, it makes sense right?
Enjoy your meal!
If you like this picture, please give me a “like”, let me know your feedback, or leave your comments/suggestions below
Check out my Instagram @picoftasty, more surprises there!
From now on, every Saturday (Central Time, US & Canada) I will release a Lightroom preset for people who are interested in food photography. Check out my website and download it for free. This is a great opportunity to practice and take your images to the next level.
- Now subscribe, get the latest release first
Location: Mae, Winnipeg, Canada
Full image: Link
#photography #CC0 #Unsplash #APIRandom
Itʼs March and the temperatures are slowly getting pleasant again. So my preparation for the next exhibition is now starting... Wild At Art 2019. This yearʼs Wild At Art takes place on the weekend of 15 and 16 June at the Wilde 13[...]
#photography #website #exhibition #Wilde13 #WildAtArt #Würzburg #Polaroids
Article word count: 914
HN Discussion: https://news.ycombinator.com/item?id=19457709
Posted by chanind (karma: 122)
Post stats: Points: 132 - Comments: 74 - 2019-03-21T21:48:27Z
#HackerNews #app #china #want #website #work #you #your
Wait, what do you mean make my app/site work in China? I don’t have to do anything to make my app work in the US or Singapore or Kenya or anywhere else, and I didn’t make the Chinese government angry, so it should just work in China, right? Sadly, it’s not so simple. If your app/website servers aren’t hosted from within China, then, for all intents and purposes, it’s blocked. I mean, it will probably technically load, but will be excruciatingly, unusably slow. And sometimes it will just not load at all for hours at a time. This is true for all services hosted outside of the firewall, even in Hong Kong.
Any time a request needs to go from within China to the outside world, or from the outside world into China, the request crosses the Chinese Great Firewall. When this happens, there’s a lot of latency that gets added, and there’s a high chance the request will randomly fail. Requests through the firewall may appear to work most of the time, but then suddenly get fully blocked for several hours. The firewall doesn’t seem like it’s implemented uniformly across China either, so it’s possible that if you test in Shanghai your request may go through but a user in Changsha will have their requests blocked.
Basically, if requests need to pass through the firewall to reach your servers outside of China you’re in for a bad time.
If you want to have any infrastructure working in China, you need to apply for an ICP license from the Chinese government. All the techniques below require that you have this license. It’s quite a pain to apply for, and takes several months, but there’s no way around it. You can find more info about registering for an ICP license here. Alicloud also has a lot of info on registering for an ICP here.
Option 1: Cloudflare China Acceleration
The easiest option to get your services working in China is to use Cloudflareʼs China acceleration. Cloudflare partnered with Baidu to extend their acceleration network with points inside of China itself. Going through this method allows requests into and out of China to bypass the firewall, so your service will be fast. Cloudflare China acceleration requires an enterprise account though, so itʼs going to be pricey.
Using Cloudflare does effectively allow you to host your infrastructure outside of China, but depending on your business it might not be entirely legal. That’s because China has strict data protection laws, and in many cases you must store Chinese users’ data inside of China. If you’re not a huge company or don’t have much sensitive data on Chinese users this may not be an issue, but it’s something to be aware of.
If most of your customers are outside of China and you just want to make sure your app/website loads quickly in China, then this is likely the best option for you.
Option 2: Make a Separate Chinese Version of your App/Website
The most direct way to make your app/website work in China is, of course, to host your servers themselves in China. You can do that using a Chinese cloud provider like Alicloud or Tencent cloud, or using the AWS China region. If you use AWS, you should be aware that the China region requires setting up a different account, and isn’t even run by Amazon!
The most technically correct way to be in compliance with the Chinese government’s data protection laws is to have a separate Chinese version of your app/website and run a separate version of your infrastructure in China. This allows all data for your Chinese users to stay in China and not be transferred abroad. No requests ever have to cross the firewall, so everything remains fast. Of course, it’s practically quite annoying to run 2 separate but identical versions of your infrastructure and apps.
Option 3: Proxy Requests on a Chinese Cloud Provider
Chinese cloud providers like Alicloud and Tencent cloud have fast connections between their datacenters through the firewall that you can make use of. You can create a VPC inside of China and a VPC outside of China and then connect them using a form of VPC peering. This gives you a high-speed connection through the firewall which you can use to proxy requests. If you host your main infrastructure in a Chinese region then you’ll be in compliance with China’s data protection laws, while still being able to serve requests outside of China via the proxied connection.
Option 4: Use a Chinese Cloud Provider Acceleration Service
If you’re hosting your infrastructure on Alicloud or Tencent cloud in China, you can accelerate requests to your infrastructure globally using their acceleration services. These work similarly to the Cloudflare option above, but in reverse. Alicloud calls their service Global Acceleration, and Tencent cloud calls their GAAP. This allows users globally to make requests to your servers in China and still have them be fast.
3rd Party Services
No matter which option you go with, you still need to test that your service is working in China. Even if you’re running on Chinese infrastructure or using Cloudflare China acceleration you may still be relying on APIs that aren’t supported in China, like Facebook Login or Google Recaptcha. If your server in China needs to make API calls to services that aren’t optimized in China you may find that a lot of those requests fail as well.
After several days of #work, I've narrowed my #collection down to a handful of good #pictures, uploaded them #online and added them to my #website. (https://pravik.xyz)
Due to a quirk in my #student visa, I cannot #sell the #prints myself, but you can #buy them (and other related products) from my stores on:
- Redbubble (https://www.redbubble.com/people/pravik)
- Society6 (https://society6.com/praviksingh)
Any #support is appreciated. If you're not personally interested in these types of things (which is understandable), I would be really glad if you were to notify someone who is...
I'll also be leaving a small comment under each item individually over the next few days... just a little heads up... (\^u^)
#diaspora #mywork #project #picture #photo #photograph #photography ❤
HN Discussion: https://news.ycombinator.com/item?id=19399576
Posted by ccnafr (karma: 1869)
Post stats: Points: 182 - Comments: 43 - 2019-03-15T13:41:14Z
#HackerNews #crime #germany #make #node #run #tor #website
A robot for every desk and every home
Article word count: 181
HN Discussion: https://news.ycombinator.com/item?id=19336077
Posted by zerzeru (karma: 73)
Post stats: Points: 129 - Comments: 43 - 2019-03-08T08:53:17Z
#HackerNews #and #enthusiast #robot #show #tech #this #website
The Micro:bit is the perfect way to start kids coding, and itʼs also a very capable board for makers...
The Micro:bit board is getting popular among makers because you can build advanced robots like the Lobot Micro:bit self...
Nybble is a DIY robotic kitten built with simple electronic components and your creativity. Nybble can create complex...
Plen2 is an advanced humanoid robot capable of complex movements and actions made in Japan. It has more...
Cubee is a cute baby robot that dances, tells stories and plays music! You can move it around ...
The two Hungry Bunnies, Chewy and Shreddy, are the robot rabbit stars of Toy Fair 2019 from...
We ‘re proud to show our robot Toplist for 2019 for February ! First place? Anki Vector still...
BOB the biped robot is the first experiment in building a 3D printed biped robot based on...
With Zowi you can code your own robot, upgrade it and explore the robotic world! Zowi is a robot...
JIM is a biped robot based on an Arduino board, son of ZOWI, son of BOB, a biped...
Anyone taking a look at the Photodarium (Private Edition) today will see that today is Butt Day... in any case, I hereby declare it to be... 😀 [Image caption: Butt Day... on the Photodarium Private 2019] Butt Day on[...]
#photography #website #Photodarium #Hintern #instantphotography
Article word count: 1062
HN Discussion: https://news.ycombinator.com/item?id=19306309
Posted by TomAnthony (karma: 2681)
Post stats: Points: 140 - Comments: 30 - 2019-03-04T22:34:23Z
#HackerNews #confirm #exploit #facebook #identities #visitor #website
Short version: I discovered a bug that would let any web page identify a logged in FB user by confirming their ID. Facebook fixed in 6-9 months and rewarded a $1000 bounty.
In last yearʼs coverage of the Facebook / Cambridge Analytica privacy concerns, Mark Zuckerberg was asked to testify before Congress, and one of the questions they asked was around whether Facebook could track users even on other websites. There was a lot of news coverage around this aspect of Facebook, and a lot of people were up in arms. As one aspect of their response, Facebook launched a Data Abuse Bounty, with the aim of protecting user data from abuse.
So, having recently found a bug in Google’s search engine, I set out to see whether I could track or identify Facebook users when they were on other sites. After a few false starts, I managed to find a bug which allows me to identify whether a visitor is logged in to a specific Facebook account, and can check hundreds of identities per second (in the range of 500 p/s).
I have created a proof of concept of the attack (now fixed), which checks both a small known list of IDs but also allows you to enter an ID and it will confirm whether you are logged in to that account or not.
Facebook has a lot of backend endpoints which are used for various AJAX requests across the site. They are almost all protected by access-control-allow-origin headers and magic prefixes on JSON responses that prevent JSON hijacking and other nasty attacks.
I searched across the site looking for any endpoints that didn’t have these protections and which did pass my user id in the URL, looking for any way I may be able to parse a response from Facebook to confirm whether the UID in the URL was correct.
I also looked for any images that include the user ID in the URL and behave differently when the UID matches the logged in user (so I could do something similar to this method, but for specific IDs); the closest I got was an image that did behave differently but the URL also included Facebook’s well known fb_dtsg parameter that is unique for users (and changes regularly) which prevented it being abused.
In addition I checked for any 301/302s in these URLs which might represent an opportunity to redirect to an image in a fashion that would allow the same trick as above.
After carefully checking dozens of these endpoints I eventually found one that had a slight inconsistency in how it behaved; a small gap, but it represented a weakness. It did have an access-control-allow-origin header, but it only included a magic prefix when the user ID (in the __user URL parameter) didnʼt match, not when it did. When the user ID provided in the URL did match, the response was pure JSON.
However, because of the pesky access-control-allow-origin header, I couldnʼt call this via an XHR request as the browser would block it. At this point I thought it may be another dead end, but I eventually realised what I could do is use it as the src for a normal <script> block; this would of course fail, but importantly it fails in a different way in the two cases (due also to the content-type header), and in such a fashion that this can be detected via onload and onerror event handlers.
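A hypothetical sketch of that detection trick (the endpoint URL and which case fires which event are illustrative assumptions, not the real Facebook endpoint):

    <script>
      // Probe whether the visitor is logged in as a given user ID.
      // The two response variants fail differently: one fires 'load',
      // the other fires 'error', so no response body needs to be read.
      function checkId(uid, onMatch, onNoMatch) {
        const probe = document.createElement('script');
        probe.src = 'https://example.com/some-endpoint?__user=' + uid;
        probe.onload = onMatch;    // assumed: pure-JSON (matching) case
        probe.onerror = onNoMatch; // assumed: prefixed/blocked case
        document.head.appendChild(probe);
      }
    </script>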
Here is an example of the URL for the endpoint:
I have created a small demo which demonstrates the attack. It checks a small list of known user IDs automatically when you arrive on the page, and also allows you to enter an ID on the page and will confirm whether you are logged in to that account.
This is limited in that you need to be checking against a known list of users, rather than just being able to determine the userʼs identity automatically. However, anyone affected by the Cambridge Analytica data situation whose data is already known could now be identified and tracked across websites, even without using any Facebook APIs.
In addition, the most sinister exploiters (e.g. a repressive regime) of such a bug would likely have a list of people they cared about identifying (which they could also narrow down based on your location and other factors). A final example might be anyone on a corporate IP address or network, where the list of users is probably fairly easy to harvest and fairly finite.
So the scope is fairly narrow, the impact on many may be small, but for some that impact could be high. This would certainly be a violation of privacy for any Facebook user who did get identified.
Disclosure Timeline
* 20th April 2018 – I filed the initial bug report.
* 20th April 2018 – Facebook replied letting me know this was being handed to the correct team to investigate.
* 1st May 2018 – I requested an update.
* 2nd May 2018 – FB replied – still investigating.
* 23rd May 2018 – I requested an update, noticing it was fixed in Chrome but not Safari.
* 23rd May 2018 – FB replied – they were investigating solutions.
* 20th June 2018 – FB awarded a $1000 bounty.
* 1st October 2018 – I requested permission to publish.
* 1st October 2018 – FB replied they were still working on the fix, and theyʼd update me.
* 19th February 2019 – I followed up and FB seemed happy for me to publish.
(It is unclear when the final fix rolled – it looks like 6-9 months after I reported it.)
Article word count: 647
HN Discussion: https://news.ycombinator.com/item?id=19257358
Posted by minimaxir (karma: 33648)
Post stats: Points: 140 - Comments: 73 - 2019-02-26T19:11:44Z
#HackerNews #brings #case #challenging #fake #first #ftc #indie #paid #retail #reviews #website
The Federal Trade Commission today announced its first case challenging a marketer’s use of fake paid reviews on an independent retail website. In settling the agency’s complaint, Cure Encapsulations, Inc. and its owner, Naftula Jacobowitz, resolved allegations that they made false and unsubstantiated claims for their garcinia cambogia weight-loss supplement and that they paid a third-party website to write and post fake reviews on Amazon.com.
“People rely on reviews when they’re shopping online,” said Andrew Smith, Director of the FTC’s Bureau of Consumer Protection. “When a company buys fake reviews to inflate its Amazon ratings, it hurts both shoppers and companies that play by the rules.”
What the FTC Did to Protect Consumers
According to the FTC’s complaint, the defendants advertised and sold “Quality Encapsulations Garcinia Cambogia Extract with HCA” capsules on Amazon.com as an appetite-suppressing, fat-blocking, weight-loss pill.
The FTC alleges that the defendants paid a website, amazonverifiedreviews.com, to create and post Amazon reviews of their product. The FTC contends that Jacobowitz told the website’s operator that his product needed to have an average rating of 4.3 out of 5 stars in order to have sales, and asked him to “Please make my product … stay a five star.”
As described in the FTC’s complaint, the reviews the defendants bought were posted on Amazon.com and gave the product a five-star rating. The complaint charges the defendants with representing that the purchased Amazon reviews were truthful reviews written by actual purchasers, when in reality they were fabricated.
The FTC’s complaint also alleges that the defendants made false and unsubstantiated claims on their Amazon product page, including through the purchased reviews, that their garcinia cambogia product is a “powerful appetite suppressant,” “Literally BLOCKS FAT From Forming,” causes significant weight loss, including as much as twenty pounds, and causes rapid and substantial weight loss, including as much as two or more pounds per week.
What the Settlement Means
The proposed court order settling the FTC’s complaint prohibits the defendants from making weight-loss, appetite-suppression, fat-blocking, or disease-treatment claims for any dietary supplement, food, or drug unless they have competent and reliable scientific evidence in the form of human clinical testing supporting the claims.
The order also requires the defendants to have competent and reliable scientific evidence to support any other claims about the health benefits or efficacy of such products. In addition, it prohibits them from making misrepresentations regarding endorsements, including that an endorsement is truthful or by an actual user.
The order next requires the defendants to email notices to consumers who bought Quality Encapsulations Garcinia Cambogia capsules detailing the FTC’s allegations regarding their efficacy claims. In addition, the order requires the defendants to notify Amazon.com, Inc. that they purchased Amazon reviews of their Quality Encapsulations Garcinia Cambogia capsules and to identify to Amazon the purchased reviews.
Finally, the order imposes a judgment of $12.8 million, which will be suspended upon payment of $50,000 to the Commission and the payment of certain unpaid income tax obligations. If the defendants are later found to have misrepresented their financial condition to the FTC, the full amount of the judgment will immediately become due.
The Commission vote authorizing the staff to file the complaint and proposed stipulated final order was 5-0. The FTC filed the complaint and proposed order in the U.S. District Court for the Eastern District of New York.
NOTE: The Commission files a complaint when it has “reason to believe” that the law has been or is being violated and it appears to the Commission that a proceeding is in the public interest. Stipulated final injunctions/orders have the force of law when approved and signed by the District Court judge.
The Federal Trade Commission works to promote competition and protect and educate consumers. You can learn more about consumer topics and file a consumer complaint online or by calling 1-877-FTC-HELP (382-4357). Like the FTC on Facebook, follow us on Twitter, read our blogs, and subscribe to press releases for the latest FTC news and resources.
Degenerates -- Art Without Copyright!
I have found this amazing #lightweight #website of an #art group/project called The #Degenerates. They share all their art in the #publicdomain under #CC0!
One of the member projects is the 10kb art gallery -- art that fits in 10 kilobytes, and is public domain as well.
I am very happy to have found this because the spirit goes exactly along the lines of what I imagine the web should be -- simplicity and efficiency of both the technology and sharing. Check it out 😀
Support Sibel Schick
She makes very nice #videos and needs #money.
Here is the excellent video "Kein Ayran für #Nazis" ("No ayran for Nazis")
"Support my #work with a recurring donation!
I am a freelance #writer. Via my #website, my Twitter account and my YouTube channel I publish content that everyone can access free of charge. My texts, commentaries, short pieces, blog posts and videos deal with social issues from an anti-racist and feminist perspective. My goal is to empower marginalised #people.
Even though my videos, blog posts and short pieces are free to access, producing them costs time and money. For #production I need, among other things, a good #phone, a #videocamera, different #microphones for different purposes, #daylightlamps and other accessories such as a #tripod, all of which have to be paid for. On top of that, this year (2019) I am finally starting to write a #book. Since that is time-intensive work, I won't manage to work full-time alongside it.
If you have something to spare and feel like supporting me and my work on an ongoing basis, I would be delighted! I will send out information regularly and keep you up to date.
You can have a look at my work via the following link: sibelschick.net"
And here is the #Patreon link with cool sample videos: https://www.patreon.com/sibel
#texte #blogging #patreon #taz
The ELECTRIC UNIVERSE®.... W. THORNHILL's Website.
"Astronomy is stuck in the gas-light era, unable to see that stars are simply electric lights strung along invisible cosmic power lines that are detectable by their magnetic fields and radio noise.
It is now a century since the Norwegian genius Kristian Birkeland proved that the phenomenal ‘northern lights’ or aurora borealis is an earthly connection with the electrical Sun. Later, Hannes Alfvén, the Swedish Nobel Prize-winning physicist, with a background in electrical engineering and experience of the northern lights, drew the solar circuit. It is no coincidence that Scandinavian scientists led the way in showing that we live in an ELECTRIC UNIVERSE®.
Why have they been ignored? The answer may be found in the inertia of prior beliefs and the failure of our educational institutions. We humans are better storytellers than scientists. We see the universe through the filter of tales we are told in childhood, and our education systems reward those who can best repeat them. Dissent is discouraged, so that many of the brightest intellects become bored and drop out. The history of science is sanitized to ignore the great controversies of the past, which were generally ‘won’ by a vote instead of reasoned debate. Today NASA does science by press release and investigative journalism is severely inhibited. And narrow experts who never left school do their glossy media ‘show and tell,’ keeping the public in the dark in this ‘dark age’ of science. It is often said that “extraordinary claims require extraordinary proof”; history shows otherwise: entrenched paradigms resist extraordinary disproof.
This website is for the curious, those who are eager to discover some reasonable answers about life, the universe and everything (as far as it is possible today) free of old beliefs that have shackled progress for centuries. It requires a beginner’s mind and a broad forensic approach to knowledge that is not taught in any university. The payoff is the spark that lights up lives.”
#THORNHILL #WEBSITE #HOLOSCIENCE #SCIENCE #ASTROPHYSICS #PHYSICS #COSMOLOGY #PHILOSOPHY #BIOLOGY #CHEMISTRY #GEOLOGY #ELECTRIC UNIVERSE THEORY
Article word count: 16
HN Discussion: https://news.ycombinator.com/item?id=19038092
Posted by ingve (karma: 96887)
Post stats: Points: 96 - Comments: 120 - 2019-01-30T18:59:54Z