Items tagged with: make
HN Discussion: https://news.ycombinator.com/item?id=19727156
Posted by _davebennett (karma: 164)
Post stats: Points: 111 - Comments: 86 - 2019-04-23T10:27:30Z
#HackerNews #anymore #anything #attempt #cant #every #for #fun #hobby #make #money
Facebook has developed a plan to turn its users into the stars of advertising campaigns through new technology which can automatically scan people’s photographs and identify which products are…
Article word count: 118
HN Discussion: https://news.ycombinator.com/item?id=19645531
Posted by ColinWright (karma: 92619)
Post stats: Points: 139 - Comments: 76 - 2019-04-12T15:17:53Z
#HackerNews #ads #advertisers #make #online #pass #photos #plans #stars #the #users #your
Facebook has developed a plan to turn its users into the stars of advertising campaigns through new technology which can automatically scan people’s photographs and identify which products are featured in them.
The social network was granted a patent in the US for a system which can detect photos people have uploaded that feature items such as alcoholic drinks and snacks. The company would then pass those images to the brands, which would turn them into adverts for other Facebook users to see.
One example given in the patent filing is a Facebook user uploading a photograph of a party in which they are pictured holding a bottle of Grey Goose vodka. The social network could automatically detect...
A just-add-css collection of styles to make simple websites just a little nicer - kognise/water.css
Article word count: 34
HN Discussion: https://news.ycombinator.com/item?id=19593866
Posted by archmaster (karma: 123)
Post stats: Points: 223 - Comments: 35 - 2019-04-07T00:12:14Z
#HackerNews #beautiful #make #show #static #tiny #watercss #websites #your
Most online salary data is suspect due to self-reporting and selection bias. We aggregate figures from actual offers made to engineers on Triplebyte in real-time. Senior software engineer: $175,000.…
Article word count: 150
HN Discussion: https://news.ycombinator.com/item?id=19545555
Posted by Harj (karma: 4738)
Post stats: Points: 108 - Comments: 85 - 2019-04-01T17:10:35Z
#HackerNews #and #engineers #how #make #much #nyc #seattle #software
Weʼre showing base salary only. Companies also offer equity, annual bonus, signing bonus, relocation, and other benefits - which often add up to a substantial fraction of total compensation. However, itʼs disingenuous to compare these non-salary components on a single axis, and we advise candidates individually on these tradeoffs when comparing their offers.
Most online salary data is suspect due to self-reporting and selection bias. In contrast, we aggregate figures from all offers made to engineers on Triplebyte in real-time.
Engineers are at a fundamental disadvantage in salary and equity negotiations. They always know less than their hiring manager. We believe this is unfair.
Companies hiring through Triplebyte are incentivized to give competitive best offers because they know our engineers are likely to get many offers at the same time, and we provide one-on-one guidance to help you negotiate with multiple companies at once.
Data last updated: March 26, 2019
The software giant's president says vague new laws are damaging the Australian technology industry and causing customers to seek options in other countries.
Article word count: 532
HN Discussion: https://news.ycombinator.com/item?id=19505880
Posted by technion (karma: 2170)
Post stats: Points: 122 - Comments: 36 - 2019-03-27T20:49:18Z
#HackerNews #australia #companies #data #encryption #laws #make #microsoft #says #storing #wary
Updated March 28, 2019 07:18:51
Microsoft president Brad Smith has warned companies are no longer comfortable storing customer data in Australia after the introduction of controversial encryption laws.
* The Government last year passed laws to give intelligence agencies greater access to encrypted data
* But the technology industry has described them as overreach that will undermine privacy
* Brad Smith said it was in the Australian Governmentʼs interest to ease concerns about the legislation
Mr Smith told a Canberra audience the laws are too vague and are damaging the Australian technology industry and broader economy, as businesses raise concerns about privacy and look to overseas markets.
"When I travel to other countries I hear companies and governments say ʼwe are no longer comfortable putting our data in Australiaʼ, so they are asking us to build more data centres in other countries," Mr Smith said.
Late last year, with the support of the Opposition, the Coalition passed laws to give intelligence agencies greater access to encrypted messages sent by suspected criminals.
In some cases, these security agencies can demand companies build new capabilities to allow them to read the otherwise hidden messages.
The Federal Government argues these laws are crucial to combatting terrorism and serious crime, but the technology industry has described them as an overreach that will hurt the industry and undermine privacy.
Mr Smith said Australia had developed a reputation as a destination for companies to store customer data, although that has been undermined in the past six months.
"We will have to sort through those issues but if I were an Australian who wanted to advance the Australian technology economy, I would want to address that and put the minds of other like-minded governments at ease," he said.
"It has not changed, to date, anything that we have had to do in Australia but we do worry about some areas of the law in terms of potential consequences."
Microsoft worried about privacy in Australia
Mr Smith said he did not believe the laws intended to create a so-called "backdoor" that would undermine encryption technology, but described the legislation as vague.
"There is this wonderful phrase about enabling companies to avoid creating a systemic weakness but that phrase is not defined," he said.
"Until it is defined I think people will worry and we will be among those who will worry because we do feel it is vitally important we protect our customerʼs privacy."
Mr Smith said it was in the interests of the Australian Government to ease concerns about the legislation or to amend it.
The Australian technology industry has this week renewed its calls for the laws to be amended before the election, arguing there should be more oversight and reduction in scope.
The Australian Signals Directorate (ASD) has rejected claims the laws give security agencies unfettered power, or that technology companies will be forced overseas.
"Australia is not the first country to enact this sort of legislation — and we will not be the last," ASD Director-General Mike Burgess said.
"Agencies in the UK already have similar powers and other nations are considering their options.
"The claims the legislation will drive tech companies offshore are similarly flawed."
Topics: government-and-politics, federal-government, computers-and-technology, internet-technology, information-technology, australia
First posted March 28, 2019 00:39:42
HN Discussion: https://news.ycombinator.com/item?id=19399576
Posted by ccnafr (karma: 1869)
Post stats: Points: 182 - Comments: 43 - 2019-03-15T13:41:14Z
#HackerNews #crime #germany #make #node #run #tor #website
Remote-desktop giant 'among more than 200 govt agencies, oil, gas, tech corps' hit by cyber-gang
Article word count: 671
HN Discussion: https://news.ycombinator.com/item?id=19349830
Posted by cow9 (karma: 256)
Post stats: Points: 134 - Comments: 36 - 2019-03-10T02:35:27Z
#HackerNews #6tb #biz #citrix #docs #emails #hackers #make #off #ransack #secrets #with
Updated: Citrix today warned its customers that foreign hackers romped through its internal company network and stole corporate secrets.
The enterprise software giant – which services businesses, the American military, and various US government agencies – said it was told by the FBI on Wednesday that miscreants had accessed Citrixʼs IT systems and exfiltrated a significant amount of data.
According to infosec firm Resecurity, which had earlier alerted the Feds and Citrix to the cyber-intrusion, at least six terabytes of sensitive internal files were swiped from the US corporation by the Iranian-backed IRIDIUM hacker gang. The spies hit in December, and Monday this week, weʼre told, lifting emails, blueprints, and other documents, after bypassing multi-factor login systems and slipping into Citrixʼs VPNs.
"The incident has been identified as a part of a sophisticated cyberespionage campaign supported by nation-state due to strong targeting on government, military-industrial complex, energy companies, financial institutions and large enterprises involved in critical areas of economy," Team Resecurity said in a statement earlier today.
"Based our recent analysis, the threat actors leveraged a combination of tools, techniques and procedures, allowing them to conduct targeted network intrusion to access at least six terabytes of sensitive data stored in the Citrix enterprise network, including email correspondence, files in network shares, and other services used for project management and procurement."
LA-based Resecurity added that IRIDIUM "has hit more than 200 government agencies, oil and gas companies, and technology companies including Citrix."
Resecurity also said it warned Citrix on December 28 that the software giant had been turned over by the hacker crew during the Christmas period. Citrix, meanwhile, said it took action – launching an internal probe and securing its networks – after hearing from the FBI earlier this week.
Earlier today, Citrix chief information security officer Stan Black gave his companyʼs side of the story. He said that, as of right now, Citrix does not know exactly which documents the hackers obtained nor how they got in – the FBI thinks it was by brute-forcing weak passwords – nor for how long they may have been camping on the corporate network.
"While our investigation is ongoing, based on what we know to date, it appears that the hackers may have accessed and downloaded business documents," Black said. "The specific documents that may have been accessed, however, are currently unknown."
At this point, Citrix reckons the intrusion was limited to its corporate network, and thus believes customer records and data were not stolen nor touched.
Beyond that, however, itʼs anyoneʼs guess as to what exactly the hackers may have lifted. As a massive provider of remote management, networking, and videoconferencing products, Citrix has an extremely large portfolio spread across a number of sectors in the enterprise IT market. Its customers include the White House and the FBI, though itʼs not known at the moment whether the hack involved or menaced Uncle Samʼs operations directly.
As the investigation is in its extremely early phases, Citrix said it will provide customers with regular updates as it gets more details. For now, Citrix said it is planning to cooperate fully with the FBI probe, and has also brought in an outside security firm to help investigate the intrusion and make sure that hackers will not be able to get back in to the network.
"Citrix is moving as quickly as possible, with the understanding that these investigations are complex, dynamic and require time to conduct properly," Black said.
"In investigations of cyber incidents, the details matter, and we are committed to communicating appropriately when we have what we believe is credible and actionable information." ®
Editorʼs note: This story was revised after publication to include Resecurityʼs version of events. A spokesperson for Citrix confirmed "Stan’s blog refers to the same incident" described by Resecurity, adding: "We have no further comment at this time, but as promised, we will provide updates when we have what we believe is credible and actionable information." Resecurity declined to comment further.
...and why they make no sense Also see Xiph.Org's new video, Digital Show & Tell, for detailed demonstrations of digital sampling in action on real equipment! Articles last month revealed that…
Article word count: 6650
HN Discussion: https://news.ycombinator.com/item?id=19318898
Posted by zpiman (karma: 94)
Post stats: Points: 132 - Comments: 122 - 2019-03-06T14:00:25Z
#HackerNews #192 #2012 #and #downloads #make #music #sense #they #why
...and why they make no sense
Also see Xiph.Orgʼs new video, Digital Show & Tell, for detailed demonstrations of digital sampling in action on real equipment!
Articles last month revealed that musician Neil Young and Appleʼs Steve Jobs discussed offering digital music downloads of ʼuncompromised studio qualityʼ. Much of the press and user commentary was particularly enthusiastic about the prospect of uncompressed 24 bit 192kHz downloads. 24/192 featured prominently in my own conversations with Mr. Youngʼs group several months ago.
Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space.
There are a few real problems with the audio quality and ʼexperienceʼ of digitally distributed music today. 24/192 solves none of them. While everyone fixates on 24/192 as a magic bullet, weʼre not going to see any actual improvement.
In the past few weeks, Iʼve had conversations with intelligent, scientifically minded individuals who believe in 24/192 downloads and want to know how anyone could possibly disagree. They asked good questions that deserve detailed answers.
I was also interested in what motivated high-rate digital audio advocacy. Responses indicate that few people understand basic signal theory or the sampling theorem, which is hardly surprising. Misunderstandings of the mathematics, technology, and physiology arose in most of the conversations, often asserted by professionals who otherwise possessed significant audio expertise. Some even argued that the sampling theorem doesnʼt really explain how digital audio actually works [1].
Misinformation and superstition only serve charlatans. So, letʼs cover some of the basics of why 24/192 distribution makes no sense before suggesting some improvements that actually do.
The ear hears via hair cells that sit on the resonant basilar membrane in the cochlea. Each hair cell is effectively tuned to a narrow frequency band determined by its position on the membrane. Sensitivity peaks in the middle of the band and falls off to either side in a lopsided cone shape overlapping the bands of other nearby hair cells. A sound is inaudible if there are no hair cells tuned to hear it.
Above left: anatomical cutaway drawing of a human cochlea with the basilar membrane colored in beige. The membrane is tuned to resonate at different frequencies along its length, with higher frequencies near the base and lower frequencies at the apex. Approximate locations of several frequencies are marked.
Above right: schematic diagram representing hair cell response along the basilar membrane as a bank of overlapping filters.
This is similar to an analog radio that picks up the frequency of a strong station near where the tuner is actually set. The farther off the stationʼs frequency is, the weaker and more distorted it gets until it disappears completely, no matter how strong. There is an upper (and lower) audible frequency limit, past which the sensitivity of the last hair cells drops to zero, and hearing ends.
Iʼm sure youʼve heard this many, many times: The human hearing range spans 20Hz to 20kHz. Itʼs important to know how researchers arrive at those specific numbers.
First, we measure the ʼabsolute threshold of hearingʼ across the entire audio range for a group of listeners. This gives us a curve representing the very quietest sound the human ear can perceive for any given frequency as measured in ideal circumstances on healthy ears. Anechoic surroundings, precision calibrated playback equipment, and rigorous statistical analysis are the easy part. Ears and auditory concentration both fatigue quickly, so testing must be done when a listener is fresh. That means lots of breaks and pauses. Testing takes anywhere from many hours to many days depending on the methodology.
Then we collect data for the opposite extreme, the ʼthreshold of painʼ. This is the point where the audio amplitude is so high that the earʼs physical and neural hardware is not only completely overwhelmed by the input, but experiences physical pain. Collecting this data is trickier. You donʼt want to permanently damage anyoneʼs hearing in the process.
Above: Approximate equal loudness curves derived from Fletcher and Munson (1933) plus modern sources for frequencies > 16kHz. The absolute threshold of hearing and threshold of pain curves are marked in red. Subsequent researchers refined these readings, culminating in the Phon scale and the ISO 226 standard equal loudness curves. Modern data indicates that the ear is significantly less sensitive to low frequencies than Fletcher and Munsonʼs results.
The upper limit of the human audio range is defined to be where the absolute threshold of hearing curve crosses the threshold of pain. To even faintly perceive the audio at that point (or beyond), it must simultaneously be unbearably loud.
At low frequencies, the cochlea works like a bass reflex cabinet. The helicotrema is an opening at the apex of the basilar membrane that acts as a port tuned to somewhere between 40Hz to 65Hz depending on the individual. Response rolls off steeply below this frequency.
Thus, 20Hz - 20kHz is a generous range. It thoroughly covers the audible spectrum, an assertion backed by nearly a century of experimental data.
Based on my correspondences, many people believe in individuals with extraordinary gifts of hearing. Do such ʼgolden earsʼ really exist?
It depends on what you call a golden ear.
Young, healthy ears hear better than old or damaged ears. Some people are exceptionally well trained to hear nuances in sound and music most people donʼt even know exist. There was a time in the 1990s when I could identify every major mp3 encoder by sound (back when they were all pretty bad), and could demonstrate this reliably in double-blind testing [2].
When healthy ears combine with highly trained discrimination abilities, I would call that person a golden ear. Even so, below-average hearing can also be trained to notice details that escape untrained listeners. Golden ears are more about training than hearing beyond the physical ability of average mortals.
Auditory researchers would love to find, test, and document individuals with truly exceptional hearing, such as a greatly extended hearing range. Normal people are nice and all, but everyone wants to find a genetic freak for a really juicy paper. We havenʼt found any such people in the past 100 years of testing, so they probably donʼt exist. Sorry. Weʼll keep looking.
Perhaps youʼre skeptical about everything Iʼve just written; it certainly goes against most marketing material. Instead, letʼs consider a hypothetical Wide Spectrum Video craze that doesnʼt carry preexisting audiophile baggage.
Above: The approximate log scale response of the human eyeʼs rods and cones, superimposed on the visible spectrum. These sensory organs respond to light in overlapping spectral bands, just as the earʼs hair cells are tuned to respond to overlapping bands of sound frequencies.
The human eye sees a limited range of frequencies of light, aka, the visible spectrum. This is directly analogous to the audible spectrum of sound waves. Like the ear, the eye has sensory cells (rods and cones) that detect light in different but overlapping frequency bands.
The visible spectrum extends from about 400THz (deep red) to 850THz (deep violet) [3]. Perception falls off steeply at the edges. Beyond these approximate limits, the light power needed for the slightest perception can fry your retinas. Thus, this is a generous span even for young, healthy, genetically gifted individuals, analogous to the generous limits of the audible spectrum.
In our hypothetical Wide Spectrum Video craze, consider a fervent group of Spectrophiles who believe these limits arenʼt generous enough. They propose that video represent not only the visible spectrum, but also infrared and ultraviolet. Continuing the comparison, thereʼs an even more hardcore [and proud of it!] faction that insists this expanded range is yet insufficient, and that video feels so much more natural when it also includes microwaves and some of the X-ray spectrum. To a Golden Eye, they insist, the difference is night and day!
Of course this is ludicrous.
No one can see X-rays (or infrared, or ultraviolet, or microwaves). It doesnʼt matter how much a person believes he can. Retinas simply donʼt have the sensory hardware.
Hereʼs an experiment anyone can do: Go get your Apple IR remote. The LED emits at 980nm, or about 306THz, in the near-IR spectrum. This is not far outside of the visible range. Take the remote into the basement, or the darkest room in your house, in the middle of the night, with the lights off. Let your eyes adjust to the blackness.
Above: Apple IR remote photographed using a digital camera. Though the emitter is quite bright and the frequency emitted is not far past the red portion of the visible spectrum, itʼs completely invisible to the eye.
Can you see the Apple Remoteʼs LED flash when you press a button [4]? No? Not even the tiniest amount? Try a few other IR remotes; many use an IR wavelength a bit closer to the visible band, around 310-350THz. You wonʼt be able to see them either. The rest emit right at the edge of visibility from 350-380 THz and may be just barely visible in complete blackness with dark-adjusted eyes [5]. All would be blindingly, painfully bright if they were well inside the visible spectrum.
These near-IR LEDs emit from the visible boundary to at most 20% beyond the visible frequency limit. 192kHz audio extends to 400% of the audible limit. Lest I be accused of comparing apples and oranges, auditory and visual perception drop off similarly toward the edges.
192kHz digital music files offer no benefits. Theyʼre not quite neutral either; practical fidelity is slightly worse. The ultrasonics are a liability during playback.
Neither audio transducers nor power amplifiers are free of distortion, and distortion tends to increase rapidly at the lowest and highest frequencies. If the same transducer reproduces ultrasonics along with audible content, any nonlinearity will shift some of the ultrasonic content down into the audible range as an uncontrolled spray of intermodulation distortion products covering the entire audible spectrum. Nonlinearity in a power amplifier will produce the same effect. The effect is very slight, but listening tests have confirmed that both effects can be audible.
Above: Illustration of distortion products resulting from intermodulation of a 30kHz and a 33kHz tone in a theoretical amplifier with a nonvarying total harmonic distortion (THD) of about .09%. Distortion products appear throughout the spectrum, including at frequencies lower than either tone.
Inaudible ultrasonics contribute to intermodulation distortion in the audible range (light blue area). Systems not designed to reproduce ultrasonics typically have much higher levels of distortion above 20kHz, further contributing to intermodulation. Widening a designʼs frequency range to account for ultrasonics requires compromises that degrade noise and distortion performance within the audible spectrum. Either way, unnecessary reproduction of ultrasonic content diminishes performance.
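The mechanism is easy to demonstrate numerically. The sketch below is a minimal model (a 1% second-order nonlinearity standing in for a real amplifier, not a simulation of any particular hardware): it passes a 30kHz + 33kHz signal through a slightly nonlinear transfer function, and the cross term of the squared signal drops an intermodulation product at 33 - 30 = 3kHz squarely into the audible band:

```python
import numpy as np

fs = 192000                       # rate high enough to hold both ultrasonic tones
t = np.arange(fs) / fs            # one second of signal
x = 0.5 * np.sin(2 * np.pi * 30000 * t) + 0.5 * np.sin(2 * np.pi * 33000 * t)

# A slightly nonlinear "amplifier": 1% second-order distortion.
y = x + 0.01 * x ** 2

spec = np.abs(np.fft.rfft(y)) * 2 / len(y)     # amplitude spectrum
freqs = np.fft.rfftfreq(len(y), 1 / fs)

# The cross term of x**2 lands at f2 - f1 = 3 kHz: audible distortion
# created entirely by inaudible ultrasonic input.
audible_product = spec[np.argmin(np.abs(freqs - 3000))]
```

Neither input tone is audible on its own, yet the 3kHz product sits in the most sensitive part of the hearing range.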
There are a few ways to avoid the extra distortion:
1. A dedicated ultrasonic-only speaker, amplifier, and crossover stage to separate and independently reproduce the ultrasonics you canʼt hear, just so they donʼt mess up the sounds you can.
2. Amplifiers and transducers designed for wider frequency reproduction, so ultrasonics donʼt cause audible intermodulation. Given equal expense and complexity, this additional frequency range must come at the cost of some performance reduction in the audible portion of the spectrum.
3. Speakers and amplifiers carefully designed not to reproduce ultrasonics anyway.
4. Not encoding such a wide frequency range to begin with. You canʼt and wonʼt have ultrasonic intermodulation distortion in the audible band if thereʼs no ultrasonic content.
They all amount to the same thing, but only 4) makes any sense.
If youʼre curious about the performance of your own system, the following samples contain a 30kHz and a 33kHz tone in a 24/96 WAV file, a longer version in a FLAC, some tri-tone warbles, and a normal song clip shifted up by 24kHz so that itʼs entirely in the ultrasonic range from 24kHz to 46kHz:
Assuming your system is actually capable of full 96kHz playback [6], the above files should be completely silent with no audible noises, tones, whistles, clicks, or other sounds. If you hear anything, your system has a nonlinearity causing audible intermodulation of the ultrasonics. Be careful when increasing volume; running into digital or analog clipping, even soft clipping, will suddenly cause loud intermodulation tones.
In summary, itʼs not certain that intermodulation from ultrasonics will be audible on a given system. The added distortion could be insignificant or it could be noticeable. Either way, ultrasonic content is never a benefit, and on plenty of systems it will audibly hurt fidelity. On the systems it doesnʼt hurt, the cost and complexity of handling ultrasonics could have been saved, or spent on improved audible range performance instead.
Sampling theory is often unintuitive without a signal processing background. Itʼs not surprising most people, even brilliant PhDs in other fields, routinely misunderstand it. Itʼs also not surprising many people donʼt even realize they have it wrong.
Above: Sampled signals are often depicted as a rough stairstep (red) that seems a poor approximation of the original signal. However, the representation is mathematically exact and the signal recovers the exact smooth shape of the original (blue) when converted back to analog.
The most common misconception is that sampling is fundamentally rough and lossy. A sampled signal is often depicted as a jagged, hard-cornered stair-step facsimile of the original perfectly smooth waveform. If this is how you envision sampling working, you may believe that the faster the sampling rate (and more bits per sample), the finer the stair-step and the closer the approximation will be. The digital signal would sound closer and closer to the original analog signal as sampling rate approaches infinity.
Similarly, many non-DSP people would look at the following:
And say, "Ugh!" It might appear that a sampled signal represents higher frequency analog waveforms badly. Or, that as audio frequency increases, the sampled quality falls and frequency response falls off, or becomes sensitive to input phase.
Looks are deceiving. These beliefs are incorrect!
As a followup to all the mail I got about digital waveforms and stairsteps, I demonstrate actual digital behavior on real equipment in our video Digital Show & Tell so you need not simply take me at my word here!
All signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling; an infinite sampling rate is not required. Sampling doesnʼt affect frequency response or phase. The analog signal can be reconstructed losslessly, smoothly, and with the exact timing of the original analog signal.
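The perfect-reconstruction claim can be checked directly. The sketch below is illustrative only (real DACs use efficient polyphase filters, not a brute-force sinc sum): it samples a band-limited 1kHz sine at 8kHz, then reconstructs the waveform at instants falling exactly halfway between samples using the Whittaker-Shannon interpolation formula:

```python
import numpy as np

fs = 8000
n = np.arange(4000)               # half a second of samples
f0 = 1000.0                       # well below the Nyquist frequency (4 kHz)
x = np.sin(2 * np.pi * f0 * n / fs)

# Reconstruct the signal at off-grid instants, away from the edges of the
# finite sample window, via sinc (Whittaker-Shannon) interpolation.
t = (np.arange(100) + 1000 + 0.5) / fs
recon = np.array([np.sum(x * np.sinc(ti * fs - n)) for ti in t])

error = np.max(np.abs(recon - np.sin(2 * np.pi * f0 * t)))
# error is tiny: the samples capture the smooth waveform between them exactly
```

The only inaccuracy comes from truncating the infinite sinc sum to a finite window; the mathematics itself is lossless.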
So the math is ideal, but what of real world complications? The most notorious is the band-limiting requirement. Signals with content over the Nyquist frequency must be lowpassed before sampling to avoid aliasing distortion; this analog lowpass is the infamous antialiasing filter. Antialiasing canʼt be ideal in practice, but modern techniques bring it very close. ...and with that we come to oversampling.
Sampling rates over 48kHz are irrelevant to high fidelity audio data, but they are internally essential to several modern digital audio techniques. Oversampling is the most relevant example [7].
Oversampling is simple and clever. You may recall from my A Digital Media Primer for Geeks that high sampling rates provide a great deal more space between the highest frequency audio we care about (20kHz) and the Nyquist frequency (half the sampling rate). This allows for simpler, smoother, more reliable analog anti-aliasing filters, and thus higher fidelity. This extra space between 20kHz and the Nyquist frequency is essentially just spectral padding for the analog filter.
Above: Whiteboard diagram from A Digital Media Primer for Geeks illustrating the transition band width available for a 48kHz ADC/DAC (left) and a 96kHz ADC/DAC (right).
Thatʼs only half the story. Because digital filters have few of the practical limitations of an analog filter, we can complete the anti-aliasing process with greater efficiency and precision digitally. The very high rate raw digital signal passes through a digital anti-aliasing filter, which has no trouble fitting a transition band into a tight space. After this further digital anti-aliasing, the extra padding samples are simply thrown away. Oversampled playback approximately works in reverse.
This means we can use low rate 44.1kHz or 48kHz audio with all the fidelity benefits of 192kHz or higher sampling (smooth frequency response, low aliasing) and none of the drawbacks (ultrasonics that cause intermodulation distortion, wasted space). Nearly all of todayʼs analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) oversample at very high rates. Few people realize this is happening because itʼs completely automatic and hidden.
ADCs and DACs didnʼt always transparently oversample. Thirty years ago, some recording consoles recorded at high sampling rates using only analog filters, and production and mastering simply used that high rate signal. The digital anti-aliasing and decimation steps (resampling to a lower rate for CDs or DAT) happened in the final stages of mastering. This may well be one of the early reasons 96kHz and 192kHz became associated with professional music production [8].
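The oversample, filter, decimate pipeline can be sketched in a few lines. This is a toy version (a short windowed-sinc FIR standing in for the far better filters inside real converters): capture at 192kHz, digitally lowpass at 20kHz, then keep every fourth sample to land at 48kHz. An ultrasonic 60kHz tone is removed before it can alias, while a 1kHz tone passes through untouched:

```python
import numpy as np

def lowpass_fir(numtaps, cutoff, fs):
    # Windowed-sinc lowpass FIR (Hamming window), normalized to unity DC gain.
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = (2 * cutoff / fs) * np.sinc(2 * cutoff / fs * n)
    h *= np.hamming(numtaps)
    return h / h.sum()

fs_hi, factor = 192000, 4                      # oversampled rate -> 48 kHz output
t = np.arange(fs_hi) / fs_hi
sig = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 60000 * t)

h = lowpass_fir(511, 20000, fs_hi)             # digital anti-alias filter
decimated = np.convolve(sig, h, mode='same')[::factor]

spec = np.abs(np.fft.rfft(decimated)) * 2 / len(decimated)
freqs = np.fft.rfftfreq(len(decimated), 1 / (fs_hi // factor))
tone_1k = spec[np.argmin(np.abs(freqs - 1000))]     # audible tone survives
alias_12k = spec[np.argmin(np.abs(freqs - 12000))]  # where 60 kHz would alias to
```

Without the digital filter, the 60kHz tone would fold down to 12kHz at the 48kHz rate; with it, the alias is suppressed by the filterʼs stopband attenuation.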
OK, so 192kHz music files make no sense. Covered, done. What about 16 bit vs. 24 bit audio?
Itʼs true that 16 bit linear PCM audio does not quite cover the entire theoretical dynamic range of the human ear in ideal conditions. Also, there are (and always will be) reasons to use more than 16 bits in recording and production.
None of that is relevant to playback; here 24 bit audio is as useless as 192kHz sampling. The good news is that at least 24 bit depth doesnʼt harm fidelity. It just doesnʼt help, and also wastes space.
Weʼve discussed the frequency range of the ear, but what about the dynamic range from the softest possible sound to the loudest possible sound?
One way to define absolute dynamic range would be to look again at the absolute threshold of hearing and threshold of pain curves. The distance between the highest point on the threshold of pain curve and the lowest point on the absolute threshold of hearing curve is about 140 decibels for a young, healthy listener. That wouldnʼt last long though; +130dB is loud enough to damage hearing permanently in seconds to minutes. For reference purposes, a jackhammer at one meter is only about 100-110dB.
The absolute threshold of hearing increases with age and hearing loss. Interestingly, the threshold of pain decreases with age rather than increasing. The hair cells of the cochlea themselves possess only a fraction of the earʼs 140dB range; musculature in the ear continuously adjusts the amount of sound reaching the cochlea by shifting the ossicles, much as the iris regulates the amount of light entering the eye [9]. This mechanism stiffens with age, limiting the earʼs dynamic range and reducing the effectiveness of its protection mechanisms [10].
Few people realize how quiet the absolute threshold of hearing really is.
The very quietest perceptible sound is about -8dBSPL [11]. Using an A-weighted scale, the hum from a 100 watt incandescent light bulb one meter away is about 10dBSPL, so about 18dB louder. The bulb will be much louder on a dimmer.
20dBSPL (or 28dB louder than the quietest audible sound) is often quoted for an empty broadcasting/recording studio or sound isolation room. This is the baseline for an exceptionally quiet environment, and one reason youʼve probably never noticed hearing a light bulb.
16 bit linear PCM has a dynamic range of 96dB according to the most common definition, which calculates dynamic range as (6*bits)dB. Many believe that 16 bit audio cannot represent arbitrary sounds quieter than -96dB. This is incorrect.
I have linked to two 16 bit audio files here; one contains a 1kHz tone at 0 dB (where 0dB is the loudest possible tone) and the other a 1kHz tone at -105dB.
Above: Spectral analysis of a -105dB tone encoded as 16 bit / 48kHz PCM. 16 bit PCM is clearly deeper than 96dB, else a -105dB tone could not be represented, nor would it be audible.
How is it possible to encode this signal, encode it with no distortion, and encode it well above the noise floor, when its peak amplitude is one third of a bit?
Part of this puzzle is solved by proper dither, which renders quantization noise independent of the input signal. By implication, this means that dithered quantization introduces no distortion, just uncorrelated noise. That in turn implies that we can encode signals of arbitrary depth, even those with peak amplitudes much smaller than one bit [12]. However, dither doesnʼt change the fact that once a signal sinks below the noise floor, it should effectively disappear. How is the -105dB tone still clearly audible above a -96dB noise floor?
The answer: Our -96dB noise floor figure is effectively wrong; weʼre using an inappropriate definition of dynamic range. (6*bits)dB gives us the RMS noise of the entire broadband signal, but each hair cell in the ear is sensitive to only a narrow fraction of the total bandwidth. As each hair cell hears only a fraction of the total noise floor energy, the noise floor at that hair cell will be much lower than the broadband figure of -96dB.
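This is easy to verify numerically. The sketch below (assuming standard TPDF dither; the parameter choices are mine) quantizes a -105dB tone to 16 bits and shows both effects at once: broadband noise near the textbook -96dB figure, yet the tone clearly dominating its own narrow FFT bin:

```python
import numpy as np

fs, bits, seconds = 48_000, 16, 4
lsb = 2.0 / (2 ** bits)                      # quantizer step for a [-1, 1) range

t = np.arange(fs * seconds) / fs
tone = 10 ** (-105 / 20) * np.sin(2 * np.pi * 1_000 * t)  # -105dBFS, peak ~1/3 LSB

# TPDF dither: difference of two uniform variables, +/- 1 LSB peak
rng = np.random.default_rng(0)
dither = (rng.random(t.size) - rng.random(t.size)) * lsb
quantized = np.round((tone + dither) / lsb) * lsb

# Broadband RMS noise: close to the textbook -96dB figure
broadband_db = 20 * np.log10(np.sqrt(np.mean(quantized ** 2)))

# Narrowband view: the tone's own FFT bin rises well above the per-bin noise
spec = np.abs(np.fft.rfft(quantized * np.hanning(t.size)))
tone_bin = int(np.argmax(spec))              # lands on the 1 kHz bin
```

Even though the tone's peak amplitude is a fraction of one bit, the strongest spectral line in the quantized signal is the 1kHz tone itself.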
Thus, 16 bit audio can go considerably deeper than 96dB. With use of shaped dither, which moves quantization noise energy into frequencies where itʼs harder to hear, the effective dynamic range of 16 bit audio reaches 120dB in practice [13], more than fifteen times deeper than the 96dB claim.
120dB is greater than the difference between a mosquito somewhere in the same room and a jackhammer a foot away... or the difference between a deserted ʼsoundproofʼ room and a sound loud enough to cause hearing damage in seconds.
16 bits is enough to store all we can hear, and will be enough forever.
Itʼs worth mentioning briefly that the earʼs S/N ratio is smaller than its absolute dynamic range. Within a given critical band, typical S/N is estimated to be only about 30dB. Relative S/N does not reach the full dynamic range even when considering widely spaced bands. This assures that linear 16 bit PCM offers higher resolution than is actually required.
It is also worth mentioning that increasing the bit depth of the audio representation from 16 to 24 bits does not increase the perceptible resolution or ʼfinenessʼ of the audio. It only increases the dynamic range, the range between the softest possible and the loudest possible sound, by lowering the noise floor. However, a 16-bit noise floor is already below what we can hear.
Professionals use 24 bit samples in recording and production [14] for headroom, noise floor, and convenience reasons.
16 bits is enough to span the real hearing range with room to spare. It does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording-- risking clipping if you guess too high and adding noise if you guess too low-- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.
An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, thereʼs no reason to keep more than 16 bits.
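A toy simulation makes the accumulation concrete (the operation count and dither scheme here are illustrative assumptions, not a real mixing chain):

```python
import numpy as np

def requantize(x, bits, rng):
    """Quantize to the given bit depth with TPDF dither."""
    lsb = 2.0 / (2 ** bits)
    dither = (rng.random(x.size) - rng.random(x.size)) * lsb
    return np.round((x + dither) / lsb) * lsb

fs = 8_000                                   # short signal; the statistics don't care
t = np.arange(fs) / fs
clean = 0.5 * np.sin(2 * np.pi * 1_000 * t)

noise_floor = {}
rng = np.random.default_rng(1)
for bits in (16, 24):
    x = clean.copy()
    for _ in range(1000):                    # a thousand dithered operations
        x = requantize(x, bits, rng)
    noise_floor[bits] = 20 * np.log10(np.sqrt(np.mean((x - clean) ** 2)))
```

Each dithered pass adds a little uncorrelated noise, so noise power grows linearly with the operation count: the 16-bit chain climbs out of inaudibility (to roughly -66dB here), while the 24-bit chain stays far below the hearing threshold (around -114dB).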
Understanding is where theory and reality meet. A matter is settled only when the two agree.
Empirical evidence from listening tests backs up the assertion that 44.1kHz/16 bit provides the highest possible fidelity playback. There are numerous controlled tests confirming this, but Iʼll plug a recent paper, Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback, done by local folks here at the Boston Audio Society.
Unfortunately, downloading the full paper requires an AES membership. However itʼs been discussed widely in articles and on forums, with the authors joining in. Hereʼs a few links:
This paper presented listeners with a choice between high-rate DVD-A/SACD content, chosen by high-definition audio advocates to show off high-defʼs superiority, and that same content resampled on the spot down to 16-bit / 44.1kHz Compact Disc rate. The listeners were challenged to identify any difference whatsoever between the two using an ABX methodology. BAS conducted the test using high-end professional equipment in noise-isolated studio listening environments with both amateur and trained professional listeners.
In 554 trials, listeners chose correctly 49.8% of the time. In other words, they were guessing. Not one listener throughout the entire test was able to identify which was 16/44.1 and which was high rate [15], and the 16-bit signal wasnʼt even dithered!
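Those numbers are exactly what chance predicts. Taking 49.8% of 554 trials as 276 correct answers (my arithmetic; the paper reports the percentage), the binomial probability of doing at least that well by coin-flipping is better than even:

```python
from math import comb

n, k = 554, 276                  # trials, correct answers (49.8%)

# Probability of getting at least k correct under pure guessing (p = 0.5)
p_at_least_k = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
```

The result is about 0.55: a fair coin does at least this well more than half the time, so the listeners' performance carries no evidence of audibility at all.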
Another recent study [16] investigated the possibility that ultrasonics were audible, as earlier studies had suggested. The test was constructed to maximize the possibility of detection by placing the intermodulation products where theyʼd be most audible. It found that the ultrasonic tones were not audible... but the intermodulation distortion products introduced by the loudspeakers could be.
This paper inspired a great deal of further research, much of it with mixed results. Some of the ambiguity is explained by finding that ultrasonics can induce more intermodulation distortion than expected in power amplifiers as well. For example, David Griesinger reproduced this experiment [17] and found that his loudspeaker setup did not introduce audible intermodulation distortion from ultrasonics, but his stereo amplifier did.
Itʼs important not to cherry-pick individual papers or ʼexpert commentaryʼ out of context or from self-interested sources. Not all papers agree completely with these results (and a few disagree in large part), so itʼs easy to find minority opinions that appear to vindicate every imaginable conclusion. Regardless, the papers and links above are representative of the vast weight and breadth of the experimental record. No peer-reviewed paper that has stood the test of time disagrees substantially with these results. Controversy exists only within the consumer and enthusiast audiophile communities.
If anything, the number of ambiguous, inconclusive, and outright invalid experimental results available through Google highlights how tricky it is to construct an accurate, objective test. The differences researchers look for are minute; they require rigorous statistical analysis to spot subconscious choices that escape test subjectsʼ awareness. That weʼre likely trying to ʼproveʼ something that doesnʼt exist makes it even more difficult. Proving a null hypothesis is akin to proving the halting problem; you canʼt. You can only collect evidence that lends overwhelming weight.
Despite this, papers that confirm the null hypothesis are especially strong evidence; confirming inaudibility is far more experimentally difficult than disputing it. Undiscovered mistakes in test methodologies and equipment nearly always produce false positive results (by accidentally introducing audible differences) rather than false negatives.
If professional researchers have such a hard time properly testing for minute, isolated audible differences, you can imagine how hard it is for amateurs.
The number one comment I heard from believers in super high rate audio was [paraphrasing]: "Iʼve listened to high rate audio myself and the improvement is obvious. Are you seriously telling me not to trust my own ears?"
Of course you can trust your ears. Itʼs brains that are gullible. I donʼt mean that flippantly; as human beings, weʼre all wired that way.
In any test where a listener can tell two choices apart via any means apart from listening, the results will usually be what the listener expected in advance; this is called confirmation bias and itʼs similar to the placebo effect. It means people ʼhearʼ differences because of subconscious cues and preferences that have nothing to do with the audio, like preferring a more expensive (or more attractive) amplifier over a cheaper option.
The human brain is designed to notice patterns and differences, even where none exist. This tendency canʼt just be turned off when a person is asked to make objective decisions; itʼs completely subconscious. Nor can a bias be defeated by mere skepticism. Controlled experimentation shows that awareness of confirmation bias can increase rather than decrease the effect! A test that doesnʼt carefully eliminate confirmation bias is worthless [18].
In single-blind testing, a listener knows nothing in advance about the test choices, and receives no feedback during the course of the test. Single-blind testing is better than casual comparison, but it does not eliminate the experimenterʼs bias. The test administrator can easily influence the test or transfer his own subconscious bias to the listener through inadvertent cues (e.g., "Are you sure thatʼs what youʼre hearing?", body language indicating a ʼwrongʼ choice, a telling hesitation, etc.). An experimenterʼs bias has also been experimentally proven to influence a test subjectʼs results.
Double-blind listening tests are the gold standard; in these tests neither the test administrator nor the testee have any knowledge of the test contents or ongoing results. Computer-run ABX tests are the most famous example, and there are freely available tools for performing ABX tests on your own computer[19]. ABX is considered a minimum bar for a listening test to be meaningful; reputable audio forums such as Hydrogen Audio often do not even allow discussion of listening results unless they meet this minimum objectivity requirement [20].
Above: Squishyball, a simple command-line ABX tool, running in an xterm.
I personally donʼt do any quality comparison tests during development, no matter how casual, without an ABX tool. Science is science, no slacking.
The human ear can consciously discriminate amplitude differences of about 1dB, and experiments show subconscious awareness of amplitude differences under .2dB. Humans almost universally consider louder audio to sound better, and .2dB is enough to establish this preference. Any comparison that fails to carefully amplitude-match the choices will see the louder choice preferred, even if the amplitude difference is too small to consciously notice. Stereo salesmen have known this trick for a long time.
The professional testing standard is to match sources to within .1dB or better. This often requires use of an oscilloscope or signal analyzer. Guessing by turning the knobs until two sources sound about the same is not good enough.
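For a sense of scale, those dB figures translate to tiny linear amplitude ratios (a trivial sketch):

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a level difference in decibels to a linear amplitude ratio."""
    return 10 ** (db / 20)

# A 0.2 dB mismatch -- enough to bias a listening test -- is only about a
# 2.3% amplitude difference; the 0.1 dB matching standard is about 1.2%.
bias_threshold = db_to_amplitude_ratio(0.2) - 1
match_standard = db_to_amplitude_ratio(0.1) - 1
```

Differences of one or two percent in amplitude are far too small to set reliably by ear, which is why the matching has to be done with instruments.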
Clipping is another easy mistake, sometimes obvious only in retrospect. Even a few clipped samples or their aftereffects are easy to hear compared to an unclipped signal.
The danger of clipping is especially pernicious in tests that create, resample, or otherwise manipulate digital signals on the fly. Suppose we want to compare the fidelity of 48kHz sampling to a 192kHz source sample. A typical way is to downsample from 192kHz to 48kHz, upsample it back to 192kHz, and then compare it to the original 192kHz sample in an ABX test [21]. This arrangement allows us to eliminate any possibility of equipment variation or sample switching influencing the results; we can use the same DAC to play both samples and switch between them without any hardware mode changes.
Unfortunately, most samples are mastered to use the full digital range. Naive resampling can and often will clip occasionally. It is necessary to either monitor for clipping (and discard clipped audio) or avoid clipping via some other means such as attenuation.
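A defensive version of the round-trip test from the previous paragraph might look like the following sketch (assuming SciPy; the function name and the 1dB headroom figure are my own choices):

```python
import numpy as np
from scipy.signal import resample_poly

def abx_loop_48k(x_192k, headroom_db=1.0):
    """Round-trip a 192 kHz signal through 48 kHz for an ABX comparison.

    Attenuates first so the resampling filters have headroom, then
    verifies that the round trip did not clip anyway.
    """
    x = np.asarray(x_192k) * 10 ** (-headroom_db / 20)  # ~1 dB of headroom
    down = resample_poly(x, up=1, down=4)               # 192 kHz -> 48 kHz
    back = resample_poly(down, up=4, down=1)            # 48 kHz -> 192 kHz
    if np.max(np.abs(back)) >= 1.0:                     # clipped anyway: discard
        raise ValueError("clipping detected; discard this sample")
    return x, back
```

Note that the pair to compare is `x` (the attenuated original) against `back`, not the untouched master, so both choices sit at identical levels.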
Iʼve run across a few articles and blog posts that declare the virtues of 24 bit or 96/192kHz by comparing a CD to an audio DVD (or SACD) of the ʼsameʼ recording. This comparison is invalid; the masters are usually different.
Inadvertent audible cues are almost inescapable in older analog and hybrid digital/analog testing setups. Purely digital testing setups can completely eliminate the problem in some forms of testing, but also multiply the potential of complex software bugs. Such limitations and bugs have a long history of causing false-positive results in testing [22].
The Digital Challenge: More on ABX Testing tells a fascinating story of a specific listening test conducted in 1984 to rebut audiophile authorities of the time who asserted that CDs were inherently inferior to vinyl. The article is not concerned so much with the results of the test (which I suspect youʼll be able to guess), but with the processes and real-world messiness involved in conducting such a test. For example, an error on the part of the testers inadvertently revealed that an invited audiophile expert had not been making choices based on audio fidelity, but rather by listening to the slightly different clicks produced by the ABX switchʼs analog relays!
Anecdotes do not replace data, but this story is instructive of the ease with which undiscovered flaws can bias listening tests. Some of the audiophile beliefs discussed within are also highly entertaining; one hopes that some modern examples are considered just as silly 20 years from now.
What actually works to improve the quality of the digital audio to which weʼre listening?
The easiest fix isnʼt digital. The most dramatic possible fidelity improvement for the cost comes from a good pair of headphones. Over-ear, in ear, open or closed, it doesnʼt much matter. They donʼt even need to be expensive, though expensive headphones can be worth the money.
Keep in mind that some headphones are expensive because theyʼre well made, durable and sound great. Others are expensive because theyʼre $20 headphones under a several hundred dollar layer of styling, brand name, and marketing. I wonʼt make specific recommendations here, but I will say youʼre not likely to find good headphones in a big box store, even if it specializes in electronics or music. As in all other aspects of consumer hi-fi, do your research (and caveat emptor).
Itʼs true enough that a properly encoded Ogg file (or MP3, or AAC file) will be indistinguishable from the original at a moderate bitrate.
But what of badly encoded files?
Twenty years ago, all mp3 encoders were really bad by todayʼs standards. Plenty of these old, bad encoders are still in use, presumably because the licenses are cheaper and most people canʼt tell or donʼt care about the difference anyway. Why would any company spend money to fix what itʼs completely unaware is broken?
Moving to a newer format like Vorbis or AAC doesnʼt necessarily help. For example, many companies and individuals used (and still use) FFmpegʼs very-low-quality built-in Vorbis encoder because it was the default in FFmpeg and they were unaware how bad it was. AAC has an even longer history of widely-deployed, low-quality encoders; all mainstream lossy formats do.
Lossless formats like FLAC avoid any possibility of damaging audio fidelity [23] with a poor quality lossy encoder, or even by a good lossy encoder used incorrectly.
A second reason to distribute lossless formats is to avoid generational loss. Each reencode or transcode loses more data; even if the first encoding is transparent, itʼs very possible the second will have audible artifacts. This matters to anyone who might want to remix or sample from downloads. It especially matters to us codec researchers; we need clean audio to work with.
The BAS test I linked earlier mentions as an aside that the SACD version of a recording can sound substantially better than the CD release. Itʼs not because of increased sample rate or depth but because the SACD used a higher-quality master. When bounced to a CD-R, the SACD version still sounds as good as the original SACD and better than the CD release because the original audio used to make the SACD was better. Good production and mastering obviously contribute to the final quality of the music [24].
The recent coverage of ʼMastered for iTunesʼ and similar initiatives from other industry labels is somewhat encouraging. What remains to be seen is whether or not Apple and the others actually ʼget itʼ or if this is merely a hook for selling consumers yet another, more expensive copy of music they already own.
Another possible ʼsales hookʼ, one Iʼd enthusiastically buy into myself, is surround recordings. Unfortunately, thereʼs some technical peril here.
Old-style discrete surround with many channels (5.1, 7.1, etc) is a technical relic dating back to the theaters of the 1960s. It is inefficient, using more channels than competing systems. The surround image is limited, and tends to collapse toward the nearer speakers when a listener sits or shifts out of position.
We can represent and encode excellent and robust localization with systems like Ambisonics. The problems are the cost of equipment for reproduction and the fact that something encoded for a natural soundfield both sounds bad when mixed down to stereo, and canʼt be created artificially in a convincing way. Itʼs hard to fake ambisonics or holographic audio, sort of like how 3D video always seems to degenerate into a gaudy gimmick that reliably makes 5% of the population motion sick.
Binaural audio is similarly difficult. You canʼt simulate it because it works slightly differently in every person. Itʼs a learned skill tuned to the self-assembling system of the pinnae, ear canals, and neural processing, and it never assembles exactly the same way in any two individuals. People also subconsciously shift their heads to enhance localization, and canʼt localize well unless they do. Thatʼs something that canʼt be captured in a binaural recording, though it can to an extent in fixed surround.
These are hardly impossible technical hurdles. Discrete surround has a proven following in the marketplace, and Iʼm personally especially excited by the possibilities offered by Ambisonics.
"I never did care for music much. Itʼs the high fidelity!" —Flanders & Swann, A Song of Reproduction
The point is enjoying the music, right? Modern playback fidelity is incomprehensibly better than the already excellent analog systems available a generation ago. Is the logical extreme any more than just another first world problem? Perhaps, but bad mixes and encodings do bother me; they distract me from the music, and Iʼm probably not alone.
Why push back against 24/192? Because itʼs a solution to a problem that doesnʼt exist, a business model based on willful ignorance and scamming people. The more that pseudoscience goes unchecked in the world at large, the harder it is for truth to overcome truthiness... even if this is a small and relatively insignificant example.
"For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." —Carl Sagan
Readers have alerted me to a pair of excellent papers of which I wasnʼt aware before beginning my own article. They tackle many of the same points I do in greater detail.
* Coding High Quality Digital Audio by Bob Stuart of Meridian Audio is beautifully concise despite its greater length. Our conclusions differ somewhat (he takes as given the need for a slightly wider frequency range and bit depth without much justification), but the presentation is clear and easy to follow. [Edit: I may not agree with many of Mr. Stuartʼs other articles, but I like this one a lot.]
* Sampling Theory For Digital Audio [Updated link 2012-10-04] by Dan Lavry of Lavry Engineering is another article that several readers pointed out. It expands my two pages or so about sampling, oversampling, and filtering into a more detailed 27 page treatment. Worry not, there are plenty of graphs, examples and references.
Stephane Pigeon of audiocheck.net wrote to plug the browser-based listening tests featured on his web site. The set of tests is relatively small as yet, but several were directly relevant in the context of this article. They worked well and I found the quality to be quite good.
—Monty (firstname.lastname@example.org) March 1, 2012; last revised March 25, 2012 to add improvements suggested by readers. [Edits and corrections made after this date are marked inline, except for spelling errors spotted on Dec 30, 2012 and March 15, 2014, and an extra ʼisʼ removed on April 1, 2013.]
[IMG]Montyʼs articles and demo work are sponsored by Red Hat Emerging Technologies. (C) Copyright 2012 Red Hat Inc. and Xiph.Org
Special thanks to Gregory Maxwell for technical contributions to this article
People already get the names wrong, so the USB group has doubled down on bad naming.
Article word count: 357
HN Discussion: https://news.ycombinator.com/item?id=19258551
Posted by nottorp (karma: 1401)
Post stats: Points: 145 - Comments: 81 - 2019-02-26T21:42:38Z
USB Type-C cable and port.
USB 3.2, which doubles the maximum speed of a USB connection to 20Gb/s, is likely to materialize in systems later this year. In preparation for this, the USB-IF—the industry group that together develops the various USB specifications—has announced the branding and naming that the new revision is going to use, and... itʼs awful.
USB 3.0 was straightforward enough. A USB 3.0 connection ran at 5Gb/s, and slower connections were USB 2 or even USB 1.1. The new 5Gb/s data rate was branded "SuperSpeed USB," following USB 2ʼs 480Mb/s "High Speed" and USB 1.1ʼs 12Mb/s "Full Speed."
But then USB 3.1 came along and muddied the waters. Its big new feature was doubling the data rate to 10Gb/s. The logical thing would have been to identify existing 5Gb/s devices as "USB 3.0" and new 10Gb/s devices as "USB 3.1." But thatʼs not what the USB-IF did. For reasons that remain hard to understand, the decision was made to retroactively rebrand USB 3.0: 5Gb/s 3.0 connections became "USB 3.1 Gen 1," with the 10Gb/s connections being "USB 3.1 Gen 2." The consumer branding is "SuperSpeed USB 10Gbps."
What this branding meant is that many manufacturers say that a device supports "USB 3.1" even if itʼs only a "USB 3.1 Gen 1" device running at 5Gb/s. Meanwhile, other manufacturers do the sensible thing: they use "USB 3.0" to denote 5Gb/s devices and reserve "USB 3.1" for 10Gb/s parts.
USB 3.2 doubles down on this confusion. 5Gb/s devices are now "USB 3.2 Gen 1." 10Gb/s devices become "USB 3.2 Gen 2." And 20Gb/s devices will be... "USB 3.2 Gen 2×2." Because they work by running two 10Gb/s connections along different pairs of wires simultaneously, and itʼs just obvious from arithmetic that youʼd number the generations "1, 2, 2×2." Perhaps theyʼre named for powers of two, starting with zero? The consumer branding is a more reasonable "SuperSpeed USB 20Gbps."
The upshot of all this is that "USB 3.2" could mean 5, 10, or 20Gb/s. You can bet that there will be manufacturers who exploit that confusion wherever and whenever they can.
Article word count: 1736
HN Discussion: https://news.ycombinator.com/item?id=19225873
Posted by turingbook (karma: 1331)
Post stats: Points: 141 - Comments: 78 - 2019-02-22T15:01:50Z
We’ve all read those 10x developer articles (I wrote some – guilty as charged!). So if you want to know what you need to work on to improve…well, you have plenty of resources. But I have very seldom come across articles on what NOT to do or how NOT to behave as a developer. And actually, this may be the most important part of the equation!
So, long overdue, here is what I think is the top list of behaviors you should work on fixing fast, if any of them apply to you ;). Why? You might not know it, but your co-workers might hate you for them, since these behaviors drag down the whole team’s productivity – at the very least!
If you have one of these developers on your team, it might be worthwhile to share this article in your Slack team channel – just out of general interest, you know 😉!
I will try to prioritize the list from most to least impactful. The goal for me is to start the discussion on the list and prioritize it, too. So please comment.
- Arrogance

That’s the first one, in my mind. You cannot work with a self-absorbed developer. I’ll even go so far as to say:
As long as you are willing to take responsibility for and learn from your mistakes, you’re not a bad developer.
Arrogance makes you think that your code is perfect. You may even blame customers for being stupid and for crashing their program rather than reflect on why your software crashed. And that’s how you get:
But also messy, unreadable code for your teammates.
The problem with arrogance is that it is a behavior that will prevent you from improving. Stop being arrogant, or you’re just a lost cause.
Some of you may already know the Dunning-Kruger effect. We will mention this effect a few times in the list. Here is a graph explaining it:
The issue with arrogance is that 1) the developers don’t understand they are on top of the Peak of “Mt. Stupid,” and 2) they will stay there.
- Sloppiness in the Work Delivered
There are many ways developers can show sloppiness in the code they deliver. We all know at least one developer who:
* gives variables cryptic, or at best not self-explanatory, names
* puts typos in function names
* leaves old, outdated comments in the code
* shows a poor selection of data types and data structures
* doesn’t bother to run the code formatter, despite being told many times to do it
* ignores the IDE warnings
* copies and pastes StackOverflow code without understanding it or tweaking the solutions to fit their own code
* doesn’t take the time to document code (nobody wants to read the whole function or file to understand what it does)
* doesn’t handle errors properly
* uses excessive dependencies, and updates them without thinking
* doesn’t bother to understand the libraries or tools added to the code, potentially leaving glaring issues
* will always insist on following “best practices” without understanding why those practices are considered “best” (there is no such thing as best practices that adapt to every team)
Don’t be such a developer. They annoy the hell out of their colleagues. They slow the whole team’s development process down, requiring their teammates to spend unnecessary time on their code reviews. Their team will dread those code reviews, will grow impatient (we’re still humans), and bugs will get through the net.
The best way to solve this is for these developers to start to take pride in their work (not to be confused with the arrogance mentioned in point 1).
- Disrespect of Other People’s Time
The two things developers hate most are interruptions and unnecessary meetings. That shouldn’t come as a surprise, as meetings are just scheduled interruptions. Developers can’t easily go back to where they were right before an interruption. They need to get into the mindset for development and then slowly trace back to where they left off. And every fellow developer knows that.
So, here are a few ways you can show disrespect to your colleague’s time and productivity:
* interrupting another developer who is clearly in the zone, for non-important stuff;
* constantly arriving late to meetings – which is a choice, whatever anyone says. Either the participants wait for everyone to arrive before starting, or they start without the latecomer, who must then be brought up to speed at some point, costing time; either way, arriving late disrupts the flow of the meeting;
* rambling on and on during meetings, or – when there are non-coders in the room – refusing to adapt to the audience, wasting everyone’s time because every point made will need to be explained again.
- Constant Negativity
Most developers are enthusiastic people, but sometimes you may have the chance (or misfortune) to work with a negative one. Negativity is infectious. If someone complains, it focuses the attention on the negative side of things.
They will criticize every choice made – the language, for instance – even though, most of the time, those developers are clearly at the top of Mount Stupid (see the Dunning-Kruger effect).
Don’t misunderstand me; there should be some criticism in the form of constructive opinions. For example, a Scala developer could talk to a Java developer about promises, saying, “Okay, your language is not as good as mine :P. But you could try CompletableFuture to have a taste of what a monad is. I will show you what you can do with that.” But unfortunately, that kind of friendly attitude is very rare these days.
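In JavaScript terms, where promises are built in, that same “taste of a monad” looks like the chain below: each `.then()` unwraps the value, transforms it, and re-wraps it in a new Promise, which is exactly the chaining that CompletableFuture offers in Java. The `fetchUserId` stub is a made-up stand-in for a real asynchronous call:

```javascript
// A stand-in for a real asynchronous call (e.g. an HTTP request).
function fetchUserId() {
  return Promise.resolve(42);
}

// Each .then() transforms the wrapped value and returns a new Promise.
fetchUserId()
  .then((id) => id * 2)
  .then((doubled) => `user-${doubled}`)
  .then((label) => console.log(label)); // logs "user-84"
```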
- Stealing Credit
I’m sure you have all seen a developer once in a while steal credit for work produced by the team. This can be done through an email to management, a one-on-one talk, or some other sneaky, indirect way.
Developers value competence above all. Taking credit for someone else’s work means claiming their competence as your own and stripping it from them. This is pretty high up on my list, as I feel it creates a lot of tension and distrust.
For greedy developers, such strategies might produce short-term visibility. But in the long run, they will be alienated. Other team members will evolve their communication to highlight their contributions better. After all, there are many ways to give credit.
- Disregard for The Team
Software engineering is done collaboratively with designers, product managers, and other developers. Respecting other people’s input and work is necessary if you don’t want them to go into Hulk mode and flip their desk. For instance:
* “How” documentation: commenting every single line with what it does, without ever explaining why it does it. If a bug surfaced and you stumbled across this code, you wouldn’t know where to begin.
* Implementing an ugly or not-to-spec UI “because they’re not a designer.”
* Not mentioning a UX problem to the product manager because it’s “not part of their job.” Ignoring the big picture makes the software hard to use, expensive to maintain, and inconsistent with the other components.
* Not trying to understand how design or product decisions are made – and then continuing to ask the same irrelevant questions without improving.
* Not considering other team members’ priority dependencies, leaving them stuck in the mud.
* Using a new tool/library without warning any teammates, which can cause unforeseen issues down the line.
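The first bullet – “how” comments instead of “why” comments – deserves a concrete contrast. In the sketch below the retry rationale is invented for illustration; the point is that a good comment records reasoning the code alone cannot convey:

```javascript
// A "how" comment merely restates the code and helps nobody:
//   // call fn again
//   return fn();

// A "why" comment records the reasoning behind the code:
function withRetry(fn) {
  try {
    return fn();
  } catch (firstError) {
    // The upstream gateway occasionally drops the first request under load,
    // so we retry exactly once before surfacing the error to the caller.
    return fn();
  }
}
```

A maintainer reading the "why" version knows immediately whether the retry is still justified, or safe to remove.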
- Lack of Focus
Engineering teams solve problems. They use their technical abilities to build features/fix bugs to solve those problems. And some developers just forget about this and will:
* philosophize about technical topics instead of focusing on the problems
* argue obstinately about technical topics without considering the initial problem (although you do, of course, need to argue when building the solution to the problem)
* have lengthy discussions about those technical topics yet rely on their own opinions (instead of facts – facts solve problems, not opinions)
With code, sure, there can be several solutions to the same problem, but each either works or it doesn’t; there is no in-between. A focused developer can resolve most uncertainties simply by trying the code out in a sandbox, for instance. A lack of focus, on the other hand, wastes the time and productivity of everyone involved.
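One way to replace opinions with facts is to measure in a sandbox. Here is a minimal timing helper, a crude sketch that assumes a Node.js environment (it uses `process.hrtime.bigint()`), but crude is enough to settle a “which is faster” debate with data:

```javascript
// Time a function over many iterations and return elapsed milliseconds.
function timeIt(fn, iterations = 1000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i += 1) fn();
  const elapsedNs = process.hrtime.bigint() - start;
  return Number(elapsedNs) / 1e6; // nanoseconds -> milliseconds
}

const ms = timeIt(() => Array.from({ length: 100 }, (_, i) => i * 2));
console.log(`1000 runs took ${ms.toFixed(2)} ms`);
```

Run both contested approaches through it and the discussion is over in minutes instead of hours.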
- Lack of Accountability
As mentioned above, either the code works or it doesn’t…but it needs to work in combination with all the code being added to the codebase by your teammates. Software engineering is probably the most collaborative work in today’s world. Any code you write will interact with that of other developers.
So, for your team to work well, you need accountability. Sure, code reviews don’t let you get away with anything. But accountability is an attitude.
Unaccountable developers will, for instance, offer excuses instead of solutions. Those excuses may include time constraints or complexity of the tasks. Nobody wants to hear excuses; they want to understand the steps to be taken toward the solution. Excuses don’t invite others to help or provide a good picture of the task’s progress.
This is my list. Feel free to add more if you think of any, or to suggest a different order of importance.
If you work with such a developer, the first thing you should know is that your manager is not doing their job. The issue should have been identified and the problematic developer(s) coached – if they were deemed coachable. The manager should have given warnings and, if the bad developers were still hurting the team, made the hard decision.
A team is far better off one developer short than with a bad element on board.
A manager who doesn’t understand this is a manager who doesn’t understand software engineering. In that case you have a bad manager on your hands, but that’s for another article ;).
So what do you do? I would say this is a question to raise in your one-on-one with your manager, so they can address the issue. If your manager does nothing, you have several options: see whether the developer can be coached and take that on yourself (with the cooperation of other teammates), or change teams or companies. Hopefully, this article can help convince the developer in question to be a better co-worker.
I have £5k to spare and want to make 5 separate investments, VC-like, such that even if 4 fail, one grows 10x over 5 years. What should those investments be?
HN Discussion: https://news.ycombinator.com/item?id=19123443
Posted by ratsimihah (karma: 360)
Post stats: Points: 54 - Comments: 74 - 2019-02-09T17:23:51Z
#HackerNews #£1k #ask #each #investments #make #should #which
Article word count: 37
HN Discussion: https://news.ycombinator.com/item?id=19122727
Posted by dieulot (karma: 1630)
Post stats: Points: 149 - Comments: 50 - 2019-02-09T15:30:00Z
#HackerNews #instant #make #minute #pages #show #sites #your
Amazon and others found that 100 milliseconds of latency costs about 1% in sales. But latency on the web is hard to overcome.
instant.page uses just-in-time preloading — it preloads a page right before a user clicks on it.
Before a user clicks on a link, they hover their mouse over that link. When a user has hovered for 65 ms there is one chance out of two that they will click on that link, so instant.page starts preloading at this moment, leaving on average over 300 ms for the page to preload.
On mobile, a user starts touching their display before releasing it, leaving on average 90 ms to preload the page.
Cheating the brain
The human brain perceives actions taking less than 100 ms as instant. As a result, instant.page makes your site feel instant even on 3G (assuming your pages are fast to render).
Jakob Nielsen: Response Times: The 3 Important Limits:
0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
Easy on your server and your user’s data plan
Pages are preloaded only when there’s a good chance that a user is going to visit them, and it preloads only the HTML of that page, being respectful of your users’ and servers’ bandwidth and CPU. It’s 1 kB and loads after everything else. And it’s free and open source (MIT license).
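The mechanism described above can be sketched roughly as follows – a simplified illustration, not instant.page’s actual source: wait 65 ms of hover over a link, then inject a `<link rel="prefetch">` so the browser fetches just the HTML ahead of the click.

```javascript
const HOVER_DELAY_MS = 65; // ~50% click probability after 65 ms of hover

// Ask the browser to prefetch just the HTML of a page.
function preloadUrl(url, doc) {
  const link = doc.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  doc.head.appendChild(link);
  return link;
}

// Wire up the hover listener (browser-only; guarded so the sketch is inert elsewhere).
if (typeof document !== 'undefined') {
  let timer = null;
  document.addEventListener('mouseover', (event) => {
    const anchor = event.target.closest('a[href]');
    if (anchor) {
      timer = setTimeout(() => preloadUrl(anchor.href, document), HOVER_DELAY_MS);
    }
  });
  document.addEventListener('mouseout', () => clearTimeout(timer));
}
```

The real library adds touch handling, deduplication, and opt-out attributes, but the core trick is just this timed prefetch.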
Make your own smart street lighting system: learn how to build an automatic street light using an LDR, the simple and easy way (no soldering skills required), with low-cost materials. A smart or intelligent street lighting system adapts to ambient light, such as sunlight, to switch the street lights ON and OFF. I hope this DIY tutorial will be useful for you!
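At its core, such a system is just a threshold check on the LDR (light-dependent resistor) reading. Sketched here in JavaScript, with an invented threshold value on an assumed 0–1023 analog scale:

```javascript
// Turn the street light on when ambient light drops below a threshold.
// The threshold of 300 is an example value; calibrate against your own LDR.
function streetLightOn(ldrReading, threshold = 300) {
  return ldrReading < threshold;
}
```

On a microcontroller the same logic would run in a loop, reading the LDR's analog pin and driving the lamp relay accordingly.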
By now, it should surprise no one to hear that artificial intelligence has a bias problem. People program their societal prejudices into algorithms all the time, often without meaning to. For instance, most image-recognition algorithms correctly identify women in flowing white dresses as “brides” but fail to do so for Indian women wearing wedding saris.