Items tagged with: Factory
Bernie undermines... #Bernie... with #capitalism...
Bernie's dreams of socialism destroyed by his best selling book:
### Providing an in-demand product justifies high incomes for those who produce it:
When confronted about being among those he has long vilified, #Sanders became defensive:
“I wrote a best-selling book,” he declared. “If you write a best-selling book, you can be a millionaire, too.”
"I made my money fair and square, so quit hassling me about it."
You can make a lot of money if you work at it:
...one of the central myths of the Bernie Sanders wing of the #American #left is that people who become #rich do so on the backs of the #poor.
"You can make a lot of money if you work at it. It's just that easy!"
This sounds #suspiciously like the "#bootstraps" argument which the left so often and so vehemently rejects. And it's pretty odd to hear it from Bernie Sanders.
Millionaires and #Billionaires provide a #benefit to #society:
Had Sanders merely written a book, he would not have made millions. But because he wrote a #best-selling book, it brought in a lot of #money. And if the key to getting rich is to sell something a lot of people want, doesn't that show that "millionaires and billionaires" are providing a benefit to society?
Bernie didn't #exploit anyone to become a millionaire:
As to the matter of #inequality, Sanders hints at its origins. What if not everyone writes a best-selling book? Wouldn't that mean some people have millions of dollars and other people have much smaller amounts of money? Wouldn't that create inequality?
Sanders apparently believes it is possible to produce a good or service without #exploiting the #workers. After all, his books did not appear by magic out of thin air. Human beings #worked to #print the books, #assemble them, and #distribute them. Sanders made money from their #labors. In fact, he made a lot more money than any truck driver who #delivered his books to stores. He made a lot more than any #factory worker who #assembled the books. Does Sanders believe he took #advantage of these workers? Apparently not. Sanders #justifies his #wealth as a matter of writing books. He doesn't mention anything about the workers at all.
1. He believes that #providing an #in-demand #product or #service #justifies high incomes for those who #produced it.
2. He believes it is possible to produce a good or service without #exploiting the #workers.
3. If you do the same, you can be a millionaire, too.
That's quite a little #capitalist #manifesto from Sanders. #funny #democrat #socialism #fail
This post discusses the benefits of full-stack data science generalists over narrow functional specialists. The latter will help you execute and bring process...
HN Discussion: https://news.ycombinator.com/item?id=19361208
Posted by ericcolson (karma: 59)
Post stats: Points: 132 - Comments: 36 - 2019-03-11T18:05:08Z
#HackerNews #beware #data #factory #generalist #pin #power #science #the
In The Wealth of Nations, Adam Smith demonstrates how the division of labor is the chief source of productivity gains using the vivid example of a pin factory assembly line: “One [person]^1 draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it.” With specialization oriented around function, each worker becomes highly skilled in a narrow task leading to process efficiencies. Output per worker increases many fold; the factory becomes extremely efficient at producing pins.
This division of labor by function is so ingrained in us even today that we are quick to organize our teams accordingly. Data science is no exception. An end-to-end algorithmic business capability requires many data functions and companies usually create teams of specialists: research scientist, data engineers, machine learning engineers, causal inference scientists, and so on. Specialists’ work is coordinated by a product manager, with hand-offs between the functions in a manner resembling the pin factory: “one person sources the data, another models it, a third implements it, a fourth measures it” and on and on.
Alas, we should not be optimizing our data science teams for productivity gains; that is what you do when you know what it is you’re producing—pins or otherwise—and are merely seeking incremental efficiencies. The goal of assembly lines is execution. We know exactly what we want—pins in Smith’s example, but one can think of any product or service in which the requirements fully describe all aspects of the product and its behavior. The role of the workers is then to execute on those requirements as efficiently as possible.
But the goal of data science is not to execute. Rather, the goal is to learn and develop profound new business capabilities. Algorithmic products and services like recommendations systems, client engagement bandits, style preference classification, size matching, fashion design systems, logistics optimizers, seasonal trend detection, and more can’t be designed up-front. They need to be learned. There are no blueprints to follow; these are novel capabilities with inherent uncertainty. Coefficients, models, model types, hyper parameters, all the elements you’ll need must be learned through experimentation, trial and error, and iteration. With pins, the learning and design are done up-front, before you produce them. With data science, you learn as you go, not before you go.
In the pin factory, when learning comes first we do not expect, nor do we want, the workers to improvise on any aspect of the product, except to produce it more efficiently. Organizing by function makes sense since task specialization leads to process efficiencies and production consistency (no variations in the end product).
But when the product is still evolving and the goal is to learn, specialization hinders our goals in several ways:
- It increases coordination costs. Those are the costs that accrue in time spent communicating, discussing, justifying, and prioritizing the work to be done. These costs scale super-linearly with the number of people involved.^2 When data scientists are organized by function, the many specialists needed at each step, and with each change, and each handoff, and so forth, make coordination costs high. For example, a data science specialist focused on statistical modeling will have to coordinate with a data engineer any time a dataset needs to be augmented in order to experiment with new features. Similarly, any time new models are trained the research scientist will have to coordinate with a machine learning engineer to deploy them to production, etc. These coordination costs act as a tax on iteration and can hamper learning.
- It exacerbates wait-time. Even more nefarious than coordination costs is the time that elapses between work. While coordination costs can typically be measured in hours—the time it takes to hold meetings, discussions, design reviews—wait-times are commonly measured in days or weeks or even months! Schedules of functional specialists are difficult to align as each specialist is allocated to several initiatives. A one-hour meeting to discuss changes may take weeks to line up. And, once aligned on the changes, the actual work itself also needs to be scheduled in the context of multiple other projects vying for specialists’ time. Work like code changes or research that requires just a few hours or days to complete still may sit undone much longer before the resources are available. Until then, iteration and learning languish.
- It narrows context. Division of labor can artificially limit learning by rewarding people for staying in their lane. For example, the research scientist who is relegated to stay within her function will focus her energy towards experimenting with different types of algorithms: gradient boosting, neural nets, random forests, and so on. To be sure, good algorithm choices could lead to incremental improvements. But there is usually far more to gain from other activities like integrating new data sources. Similarly, she may develop a model that exhausts every bit of explanatory power inherent to the data. Yet, her biggest opportunity may lie in changing the objective function or relaxing certain constraints. This is hard to see or do when her job function is limited. Since the research scientist is specialized in optimizing algorithms, she’s far less likely to pursue anything else, even when it carries outsized benefits.
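To make the wait-time asymmetry concrete, here is a deliberately simplified sketch with hypothetical numbers (three 8-hour tasks per iteration, and roughly two work-weeks of scheduling wait before each specialist handoff; none of these figures come from the article):

```python
# Hypothetical iteration: three functional tasks of 8 hours each.
task_hours = [8, 8, 8]

# Specialists: each task waits in another person's queue before it starts.
wait_hours_per_handoff = 80  # ~2 work-weeks of calendar time per handoff (assumed)
specialist_total = sum(task_hours) + wait_hours_per_handoff * len(task_hours)

# Generalist: one person does the tasks back-to-back, so no handoff waits.
generalist_total = sum(task_hours)

print(specialist_total, generalist_total)  # 264 vs 24 hours per iteration
```

The point of the sketch is that the dominant term is the waiting, not the working: even if the generalist were half as fast at each task, she would still iterate an order of magnitude more often.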
Telling symptoms can surface when data science teams are run like pin factories, for example in simple status updates: “waiting on ETL changes” and “waiting on ML Eng resources” are common blockers. However, I believe the more insidious impact lies in what you don’t hear, because you can’t lament what you haven’t yet learned. Perfect execution on requirements, and the complacency brought on by achieving process efficiencies, can mask a difficult truth: the organization is blissfully unaware of the valuable learning it is missing out on.
The solution to this problem is, of course, to get rid of the pin factory. In order to encourage learning and iteration, data science roles need to be made more general, with broad responsibilities agnostic to technical function. That is, organize the data scientists such that they are optimized to learn. This means hiring “full stack data scientists”—generalists—who can perform diverse functions: from conception to modeling to implementation to measurement. With fewer^3 people to keep in the loop, coordination costs plummet. The generalist moves fluidly between functions, extending the data pipeline to add more data, trying new features in the model, deploying new versions to production for causal measurement, and repeating the steps as quickly as new ideas come to her. Of course, the generalist performs the different functions sequentially rather than in parallel—she is just one person after all. However, doing the work typically takes just a fraction of the wait-time it would take for another specialist resource to become available. So, iteration time goes down.
Our generalist may not be as adept as a specialist in any one function. But we are not seeking functional excellence or small incremental improvements. Rather, we seek to learn and discover all-new business capabilities with step-change impact. With full context for the holistic solution she sees opportunities that a narrow specialist won’t. She has more ideas and tries more things. She fails more, too. However, the cost of failure is low and the benefits of learning are high. This asymmetry favors rapid iteration and rewards learning.
It is important to note that this amount of autonomy and diversity in skill granted to the full-stack data scientists depends greatly on the assumption of a solid data platform on which to work. A well constructed data platform abstracts the data scientists from the complexities of containerization, distributed processing, automatic failover, and other advanced computer science concepts. In addition to abstraction, a robust data platform can provide seamless hooks into an experimentation infrastructure, automate monitoring and alerting, provide auto-scaling, and enable visualization of debugging output and algorithmic results. These components are designed and built by data platform engineers, but to be clear, there is not a hand-off from the data scientist to a data platform team. It’s the data scientist that is responsible for all the code that is deployed to run on top of the platform. And, for the love of everything sacred and holy in the profession, don’t hand-off ETL for engineers to write.
I too was once lured to a function-based division of labor by the attraction of process efficiencies. But, through trial and error (is there no better way to learn?) I’ve found that more generalized roles better facilitate learning and innovating,^4 and provide the right kinds of scaling: to discover and build many more business capabilities than a specialist approach. And, while there are some important considerations^5 that may make this approach to organization more or less tenable in some companies (see footnote), I believe the full stack data scientist model provides a better starting place. Start with them, and then consciously (grudgingly) move toward a function-based division of labor only when clearly necessary.
There is further downside to functional specialization. It can lead to loss of accountability and passion from the workers. Smith himself criticizes the division of labor, suggesting that it leads to the dulling of talent—that workers become ignorant and insular as their roles are confined to a few repetitive tasks.^6 While specialization may provide process efficiencies it is less likely to inspire workers.
By contrast, generalist roles provide all the things that drive job satisfaction: autonomy, mastery, and purpose.^7 Autonomy in that they are not dependent on someone else for success. Mastery in that they know the business capability from end-to-end. And, purpose in that they have a direct connection to the impact on the business they’re making. If we succeed in getting people to be passionate about their work and making a big impact on the company, then the rest falls into place naturally.
↩ I took the liberty of modernizing Smith’s use of pronouns.
↩ As J. Richard Hackman taught us, the number of relationships (r) grows as a function of the number of members (n) per this equation: r = (n^2 - n) / 2. And, each relationship bears some amount of coordination costs. See: Hackman, J. Richard. Leading teams: setting the stage for great performances. Boston, Mass.: Harvard Business School Press, 2002. Print.
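Hackman's relationship count is easy to tabulate, and it makes the super-linear growth of coordination costs obvious; a minimal sketch:

```python
# Hackman's pairwise-relationship count: r = (n^2 - n) / 2.
# Each relationship carries some coordination cost, so total
# coordination overhead grows super-linearly with team size.
def relationships(n: int) -> int:
    return (n * n - n) // 2

for n in (3, 5, 10, 20):
    print(n, relationships(n))  # 3->3, 5->10, 10->45, 20->190
```

Doubling a team from 10 to 20 people more than quadruples the number of relationships to maintain, which is why keeping the loop small matters more than any individual's efficiency.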
↩ It’s important to note that I am not suggesting that hiring full-stack data scientists results in fewer people overall. Rather, I am merely suggesting that when organized differently, their incentives are better aligned with learning vs. efficiency gains. Consider the following contrasting department/team structures, each with 3 people. Fractional estimates and summed team sizes are illustrative only.
Specialist Model: organized for functional efficiency. Workers are not dedicated to any one business capability, rather their time is allocated to many.
| Function | Recommendation System | Algorithmic Inventory Management System | Algorithmic Client Engagement System | Team Size |
|---|---|---|---|---|
| ML Engineering | 1/3 | 1/3 | 1/3 | 1 |
| Modeling | 1/3 | 1/3 | 1/3 | 1 |
| Data Engineering | 1/3 | 1/3 | 1/3 | 1 |
| **Total Dept Size** | | | | **3** |
Generalists Model: Full-stack Data Scientists optimized for learning. Workers are fully dedicated to a business capability and perform all the functions.
| Function | Recommendation System | Algorithmic Inventory Management System | Algorithmic Client Engagement System |
|---|---|---|---|
| ML Engineering | 1 | 1 | 1 |
| Modeling | 1 | 1 | 1 |
| Data Engineering | 1 | 1 | 1 |
| **Team Size** | **1** | **1** | **1** |

Total Dept Size: 3 (within each column, the same generalist performs every function).
↩ A more efficient way to learn about this approach to organization vs the trial and error I went through is to read the book by Amy C. Edmondson called “Teaming: How Organizations Learn, Innovate, and Compete in the Knowledge Economy” (Jossey-Bass, 2014).
↩ This process of iteration assumes low cost of trial and error. If the cost of error is high you may want to rethink (i.e., it is not advised for medical or manufacturing applications). In addition, data volume and system availability requirements should also be considered. If you are dealing with petabytes or exabytes of data, specialization in data engineering may be warranted. Similarly, system availability (i.e., uptime) and innovation are tradeoffs. If availability is paramount, functional excellence may trump learning. Finally, the full-stack data science model relies on the assumption of great people. They are not unicorns; they can be found as well as made. But they are in high demand and it will require certain conditions to attract and retain them (competitive compensation, company values, interesting work, etc.). Be sure your company culture can support this.
↩ Smith, Adam. An inquiry into the nature and causes of the wealth of nations. Dublin: Printed for Messrs. Whitestone, 1776. Print. Page 464.
↩ Pink, Daniel H.. Drive: the surprising truth about what motivates us. New York, NY: Riverhead Books, 2009.
Designed to get work done.
Article word count: 4397
HN Discussion: https://news.ycombinator.com/item?id=19239674
Posted by T-A (karma: 8433)
Post stats: Points: 104 - Comments: 38 - 2019-02-24T18:02:08Z
#HackerNews #3500 #factory #for #headset #hololens #microsofts #mixed-reality #the
I am in a tiny room in a basement somewhere in Microsoft’s Redmond, Washington headquarters, wearing an early version of the HoloLens 2 headset. In front of me is a very real ATV, which is missing a bolt. Not quite at the corner of my vision — but certainly off to the side — I see a glowing indicator pointing to a bucket of the right bolts. I walk over, bend down to look more closely at the shelf, and grab one.
Back at the ATV, a holographic set of instructions hovers above it, telling me what to do and pointing to the exact spot where the bolt needs to go. After a couple of minutes, I’ve successfully fixed the thing — guided by holograms. I tap a holographic button on the guide to close out the instructions.
This sort of demo is quickly becoming commonplace for tech journalists like myself. But if you read the previous description closely, you’ll find that there are three key pieces of technical innovation hidden in plain sight.
Here they are: I saw a hologram off to the side because the field of view in which they can appear is much larger than before. I bent down and didn’t worry about an awkward headset shifting around because it was better balanced on my head. I pushed a button just by pushing a button because I didn’t need to learn a complicated gesture to operate the HoloLens 2.
Those three things might not seem all that remarkable to you, but that’s precisely the point. Microsoft needed to make the HoloLens feel much more natural if it really plans to get people to use it, and it has.
There’s one more unremarkably remarkable thing: even though it was just a demo, I was playing the part of a worker because that’s who the HoloLens 2 is exclusively designed for — workers, not consumers.
The Microsoft HoloLens 2 is available for preorder today for $3,500, and it’s expected to ship later this year. However, Microsoft has decided that it is only going to sell to enterprise customers who want to deploy the headset to their workers. As of right now, Microsoft isn’t even announcing a developer kit version of the HoloLens 2.
Compared to the HoloLens we first saw demonstrated four years ago, the second version is better in nearly every important way. It’s more comfortable, it has a much larger field of view, and it’s better able to detect real physical objects in the room. It features new components like the Azure Kinect sensor, an ARM processor, eye-tracking sensors, and an entirely different display system.
It has a couple of speakers, the visor flips up, and it can see what your hands are doing more accurately than before. There’s an 8-megapixel front-facing camera for video conferencing, it’s capable of full six-degrees-of-freedom tracking, and it also uses USB-C to charge. It is, in short, chock-full of new technology. But after four years, that should be no surprise.
Before we go any further, a disclosure: my wife works for Oculus, the VR division of Facebook. We don’t discuss her work or the stories that I am working on. It doesn’t affect how I think about this tech, and I have no idea if Facebook and Oculus are building anything directly competitive with this product.
The biggest complaint about the first HoloLens was simple: you only saw the holograms in a relatively small box, directly in front of you. Turn your head even a little, and they would disappear from your field of view. Worse, their edges would clip out of existence even when you were staring right at them. It was like looking at a digital world through a tiny rectangle.
The HoloLens 2 has a field of view that’s twice as big as before. It doesn’t quite fill your entire field of vision — there’s still clipping — but it’s big enough now that you no longer feel constantly annoyed by a letterbox. Microsoft says that each eye has the equivalent of a 2K display in front of it, but it’s better to think of that as a metaphor than a precise spec. The exact spec is that it has a “holographic density of 47 pixels per degree,” which means that the pixel density is high enough to allow you to read 8-point font.
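As a rough sanity check on those numbers (assuming the "2K" shorthand means about 2,000 horizontal pixels per eye, which the article does not specify), the stated 47 pixels per degree implies a horizontal field of view in the low-40-degree range per eye:

```python
# Back-of-the-envelope check: angular width = pixels / pixel density.
pixels_per_degree = 47      # Microsoft's stated "holographic density"
horizontal_pixels = 2000    # assumed from the "2K display" shorthand
horizontal_fov = horizontal_pixels / pixels_per_degree
print(f"{horizontal_fov:.1f} degrees per eye")  # -> "42.6 degrees per eye"
```

That is consistent with "twice as big as before" given the first HoloLens's famously narrow window, while still falling well short of filling human peripheral vision.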
Typically, when a tech product gets better specs like these, it happens through sheer force of technical iteration: faster processors, bigger batteries, more RAM, and so on. But that strategy wouldn’t have worked for the display on the HoloLens 2. It needed to get lighter, not heavier. So Microsoft had to completely change over to a different kind of display technology.
Lasers and mirrors
Laser-based displays have become the thing to do for computers on your face. Intel’s Vaunt project used lasers, and the North Focals smart glasses do, too. Although Microsoft is using some of the same basic components, it’s taken them in a different direction and gone much further in developing what they can do.
The lasers in the HoloLens 2 shine into a set of mirrors that oscillate as quickly as 54,000 cycles per second so the reflected light can paint a display. Those two pieces together form the basis of a microelectromechanical system (MEMS) display. That’s all tricky to make, but the really tricky part for a MEMS display is getting the image that it paints into your eyeball.
One solution that companies like North have used is a holographic film on the lens to reflect the image directly into your retina. That has lots of drawbacks: a tiny display and low resolution, for two. But the truly problematic part is simply ensuring the display is aimed right into your eye. You have to be custom-fitted for the North glasses, and the image can disappear entirely if they’re misaligned.
Microsoft doesn’t want any of those problems, so it turned to the same thing it used on the first HoloLens: waveguides. They’re the pieces of glass in front of your eye that are carefully etched so they can reflect the holograms in front of your eyes. The waveguides on the HoloLens 2 are lighter now because Microsoft is using two sandwiched glass plates instead of three.
When you put the whole system together — the lasers, the mirrors, and the waveguide — you can get a brighter display with a wider field of view that doesn’t have to be precisely aimed into your eyes to work. Zulfi Alam, general manager for Optics Engineering at Microsoft, contends that Microsoft is way out ahead with this system and that waveguides are definitely the way to go for mixed reality. “There’s no competition for the next two or three years that can come close to this level of fidelity in the waveguides,” he argues.
Do you want a wider field of view? Simple. Just increase the angle of the mirrors that reflect the laser light. A wider angle means a bigger image.
Do you want brighter images? Simple again. Lasers, not to put too fine a point on it, have light to spare. Of course, you have to deal with the fact that waveguides lose a ton of light, but the displays I saw were set to 500 nits and looked plenty bright to me. Microsoft thinks it could go much brighter in the final version, depending on the power draw.
Do you want to see the holograms without getting specifically fitted for your headset? Simple yet again. The waveguide doesn’t require specific fitting or measurement. You can just put the headset on and get going. It also can sit far enough in front of your eyes to allow you to wear whatever glasses you need comfortably.
Simple, simple, simple, right? In truth, it’s devilishly complex. Microsoft had to create an entirely new etching system for the waveguides. It had to figure out how to direct light to the right place in the waveguides nearly photon by photon. “We are simulating every photon that comes from the laser,” Alam says. The light from the lasers isn’t just reflected; it’s split apart into multiple colors and through multiple “pupils” in the display system and then “reconstituted” into the right spot on the waveguides. “Each photon is calculated where it’s expected to go,” Alam says. That takes a ton of computing power, so Microsoft had to develop custom silicon to do all of the calculations on where the photons would go.
And though alignment is much easier with the waveguide, that doesn’t mean it’s perfect. That’s why there are two tiny cameras on the nose bridge, directed at your eyeballs. They will allow the HoloLens 2 to automatically measure the distance between your pupils and adjust the image accordingly. Those cameras will also allow the HoloLens 2 to vertically adjust the image if it gets tilted or if your eyes are not perfectly even. (They are not. Sorry.)
A sort of free benefit of those cameras is that they can also scan your retinas to log you into the HoloLens 2 securely. It runs Windows, after all, and therefore it supports Windows Hello. They also track where you’re looking, which enables some new user interactions I’ll get to below.
A MEMS mirror under a high-speed camera. GIF: Microsoft
Then there’s power: lasers, oscillating mirrors, and custom chips to handle the computing for all of that must chew through battery. But Alam tells me that even with all of that, it still manages to require less power than the alternative. The mirrors oscillate in resonance, so it takes less energy to move them, sort of like they’re the fastest metronomes ever. Lasers are also less lossy than LEDs, and custom silicon can be optimized to its specific task.
”Our evolution is toward a form factor that is truly glasses,” Alam says, “and all these are significant steps in this journey.”
All that tech is impressive for sure, but I don’t want to oversell the image quality. What I was using wasn’t a finished product. I did see a tiny halo around some of the holograms, and they sometimes jumped around a bit. Most of the features based on the nose bridge eye scanners weren’t flipped on yet, either. Still, compared to the first HoloLens, what I saw crossed over the line from “cool demo I’d use for 20 minutes and then be annoyed” to “I could see people using this for a few hours if the software was really useful.”
But if you’re going to use a headset for “a few hours,” it needs to be comfortable enough to leave on in the first place.
Alex Kipman, technical fellow - AI and Mixed Reality, Microsoft.
Here’s how you put the HoloLens 2 on: you put it on like a baseball cap, twist a knob on the back to tighten the headband, and then you’ll start seeing holograms. The end.
It’s much less fiddly than the last HoloLens or any other face-mounted display I’ve ever tried. Because of all the work on the display system, you can skip the extra “fuss with the position to make sure you can see the image” step. The body of the thing is simpler, too. It’s a single band that’s held on with minimal pressure on the back of your head and on your forehead. (There’s an optional top strap if you need it.)
All of that is nice, but it’s pointless if the headset is uncomfortable to wear. And though I never had it on for more than a 20-minute stint, I think it will hold up for longer periods.
Microsoft has a “human factors” lab where it loves to show off its collection of dummy human heads and high-speed cameras. Carl Ledbetter, senior director of design for the Microsoft Device Design Team, walked me through all of the prototypes and material Microsoft tried to get into the final product. He explained how Microsoft experimented with different designs and materials, ultimately landing on carbon fiber to save weight.
”The reality is [we have to] fit kids, adults, men, women, and different ethnicities around the world. Everybody’s head is different,” he says. Microsoft has a database of around 600 heads tracking the shape of the cranium, eye depth, the size and relative position of the nose bridge, and other variations. Ledbetter’s team attached sensors to people’s necks to measure muscle strain, to make sure the center of gravity was right.
The result is that the HoloLens 2 has a more forgiving and flexible fit. It simply does a better job of accommodating basic, physical human realities. You can flip the visor up so it’s out of your field of view so you can make eye contact without removing the headset. The memory foam pad that rests on your forehead is removable and cleanable, and the thermals have been completely redesigned so heat is piped away from your head.
All of that really helps, but the most important thing Microsoft did was move the center of gravity right behind your ears instead of up by your eyes. The HoloLens 2 isn’t really much lighter than the original HoloLens. It feels lighter, though, because it’s balanced more naturally on your head. That balance makes a huge difference. The weight of it is less noticeable and should put less strain on your neck.
Ledbetter moved the weight by literally moving the heaviest part: the main processor and battery are now located in a module that sits on the back of the headset, with wires inside the headband running up to the display board and components in the front. That processor, by the way, is an ARM-based Qualcomm Snapdragon 850, and that’s important because it addresses another basic human reality: we hate when the battery dies, and we hate plugging stuff in. An ARM processor means it can have a smaller battery.
“If you’re going to be on battery, [it’s] hard to find a product that’s not running ARM today.”
The original HoloLens ran on an Intel processor, and it ran Windows. Since then, Microsoft has done a ton of work to get Windows working well on ARM. Those efforts are slowly coming to fruition on laptops, but Intel is still the order of the day on those machines where raw speed is usually more important to users than battery life. In general, there’s a tension with Intel. It’s not delivering the lower-power chips that mobile devices demand. Intel even reportedly had to lobby Microsoft to keep the Surface Go on its chips.
So what about the HoloLens 2? Alex Kipman is the person in charge of the whole HoloLens project. He says that “ARM rules in battery-operated devices. The ARM decision became fairly easy. If you’re going to be on battery, [it’s] hard to find a product that’s not running ARM today.”
When I point out that there are plenty of Windows laptops running on batteries using Intel chips, he becomes blunter. “Intel doesn’t even have an SoC [system on chip] right now for these types of products that run on battery. They did have one, the previous version [of the HoloLens] had Cherry Trail, which they discontinued. That decision is a no-brainer.”
For workers, not consumers
The HoloLens 2 is only being sold to corporations, not to consumers. It’s designed for what Kipman calls “first-line workers,” people in auto shops, factory floors, operating rooms, and out in the field fixing stuff. It’s designed for people who work with their hands and find it difficult to integrate a computer or smartphone into their daily work. Kipman wants to replace the grease-stained Windows 2000 computer sitting in the corner of the workroom. It’s pretty much the same decision Google made for Google Glass.
“If you think about 7 billion people in the world, people like you and I — knowledge workers — are by far the minority,” he replies. To him, the workers who will use this are “maybe people that are fixing our jet propulsion engine. Maybe they are the people that are in some retail space. Maybe they’re the doctors that are operating on you in an operating room.”
He continues, saying it’s for “people that have been, in a sense, neglected or haven’t had access to technology [in their hands-on jobs] because PCs, tablets, phones don’t really lend themselves to those experiences.”
"“I’m here to tell you that it’s still not a consumer product.”"
Fair enough. That’s completely in keeping with Microsoft’s new focus on serving corporate and enterprise needs instead of trying to crank out hit consumer products. That was one of my takeaways when I interviewed CEO Satya Nadella last year, and it holds true today. As I wrote then, it’s “a different kind of Microsoft than what we’re used to thinking of. It’s a little less flashy, yes, but it has the benefit of being a lot more likely to succeed.”
Besides, Kipman argues, even the HoloLens 2 isn’t good enough to be a real mass-market consumer technology product. “This is the best, highest watermark of what can be achieved in mixed reality and I’m here to tell you that it’s still not a consumer product,” he says, then continues:
Why is it not a consumer product? It’s not as immersive as you want it to be. It’s more than twice as immersive as the previous one, [but it’s] still not immersive enough for that consumer off the street to go use it. It’s still not comfortable enough … I would say that until these things are way more immersive than the most immersive product, way more comfortable than the most comfortable product, and at or under $1,000, I think people are kidding themselves in thinking that these products are ready.
Kipman says that Microsoft has not participated in the consumer hype cycle for these types of products. “We were not the company that hyped VR. We are certainly not the company that hyped AR. And since we merged the two into the mixed reality and AI efforts, we haven’t hyped either.”
That’s not exactly true. We have seen plenty of demos from Microsoft showing off games — including Minecraft — and other consumer applications for the HoloLens. So this move to the enterprise market is absolutely a pivot.
But it’s a pivot that’s part and parcel with Microsoft’s larger corporate strategy. And just because it’s no longer being positioned as a consumer product doesn’t mean that it’s not an important product — one that Microsoft appears to be committed to and is developing software for.
A better interface on your face
The first HoloLens required users to learn awkward gestures with names like “Air Tap” and “Bloom.” You had to make these really specific hand gestures because that’s all the first HoloLens’ sensors could detect and understand.
The HoloLens 2 can detect and understand much more because of a new array of sensors for reading the room called the Azure Kinect. “Kinect” because that’s the brand for Microsoft’s cameras that can scan rooms, “Azure” because seemingly everything the company does these days is somehow connected to its cloud service and as a further signal that this is a business product, not an Xbox add-on.
“HoloLens 1 is just one big mesh. It’s like dropping a blanket over the real world,” Kipman says. “With HoloLens 2, we go from spatial mapping to semantic understanding of spaces. You understand what’s a couch, what is a human sitting on the couch, what’s the difference between a window and a wall.”
I can’t speak to how well Kinect is actually able to identify objects — Microsoft didn’t demo any of that for us — but it theoretically works because the Azure Kinect sees the room at a higher resolution and because it is hooked up to cloud services that help it figure out what things are.
There’s one aspect where I can definitively say that the higher fidelity is real: it’s able to identify my hand and what it’s doing much more easily. It can track up to 25 points of articulation on both hands in space, which means that you shouldn’t need to use the Air Tap gesture to interact with holograms anymore.
Resizing a hologram with a natural gesture. Footage does not show actual Field of View. Image: Microsoft
In one demo, I paced around a room looking at various holograms that were set up on tables. As I reached my hands in, a box appeared around each one with little grab handles on the edges and corners. I could just reach in and grab the whole box and move the hologram around. I could also just grab one edge to rotate it, or two to resize it. When there was a button, I could stick my finger out and push it. I doubt that it’s accurate enough to, say, let you type on a virtual QWERTY keyboard, but it’s a big step up over the first generation, nonetheless.
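That grab interaction presumably falls out of the joint tracking: with fingertip positions in space, "reach in and grab" can be reduced to a distance check between the thumb and index tips. Here is a minimal sketch of that idea; the joint names, the dictionary layout, and the 3 cm threshold are my own illustrative assumptions, not the actual HoloLens hand-tracking API.

```python
import math

# Illustrative pinch threshold: thumb tip and index tip within 3 cm
# counts as a "grab". This value is an assumption, not Microsoft's tuning.
PINCH_THRESHOLD_M = 0.03

def is_pinching(joints: dict) -> bool:
    """Return True when the tracked thumb tip and index tip are close
    enough together to count as a grab gesture.

    `joints` maps joint names to (x, y, z) positions in meters; the
    HoloLens 2 tracks roughly 25 such joints per hand.
    """
    thumb = joints["thumb_tip"]
    index = joints["index_tip"]
    return math.dist(thumb, index) < PINCH_THRESHOLD_M
```

A real system would smooth the joint positions over several frames and add hysteresis so the grab doesn't flicker on and off near the threshold, but the core test is this simple.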
Eye tracking also comes into play in how you interact with holograms. The HoloLens 2 can detect where you’re looking and use that information as a kind of user interface. There were demos where I just stared at a little bubble to make it pop into holographic fireworks, but the most useful one was an auto-scroller. The closer to the bottom of the page I got, the faster the words scrolled, but then it stopped when I looked back up.
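The auto-scroller behavior described above can be sketched as a simple mapping from vertical gaze position to scroll speed. The threshold and the linear ramp below are illustrative assumptions about how such a control might be tuned, not Microsoft's implementation.

```python
def scroll_speed(gaze_y: float, max_speed: float = 200.0) -> float:
    """Map a normalized vertical gaze position (0.0 = top of the page,
    1.0 = bottom) to a scroll speed in pixels per second.

    Looking at the upper half of the page stops scrolling; the closer
    the gaze gets to the bottom, the faster the text scrolls.
    """
    threshold = 0.5  # illustrative: no scrolling while reading the top half
    if gaze_y <= threshold:
        return 0.0
    # Linear ramp from 0 at the threshold up to max_speed at the bottom edge.
    return max_speed * (gaze_y - threshold) / (1.0 - threshold)
```

The appeal of this design is that it needs no explicit gesture at all: the reading behavior itself is the input.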
I didn’t see the full top-level user interface, so I don’t know if that’s changing. But one thing absolutely isn’t: it still runs Windows. It utilizes the shared code in Windows OneCore, which means you won’t get a traditional Windows desktop shell, but you will be able to run any Universal Windows App on it. It also has the necessary drivers to let you connect a keyboard and a mouse to it over Bluetooth if you really want to.
Chaitanya Sareen, the principal group program manager for Microsoft Mixed Reality, explains that they’re trying to “make the machine work around the person versus the other way around.” Sareen calls this “instinctual interaction” as opposed to “intuitive,” since it can piggyback off of what we already do with real objects in the world. “Is anyone born saying ‘There’s going to be a close button [in the upper corner of a window]’? No,” he says. “A lot of interfaces we use are learned.”
"“Is anyone born saying, ‘there’s going to be a close button’? No.”"
Sareen is still thinking through some of the details of what the user interface will be, but the goal is to use many of the natural gestures you picked up as a toddler instead of making you learn a whole new interface language.
Microsoft is also making new software tools available to developers. One of the most important, Dynamics 365 Guides, will be a mixed reality app with templates to create instructions for repairing real-world things like that ATV. Other tools depend on Microsoft’s cloud services. One is Azure Remote Rendering, which lets the HoloLens offload some compute load to the cloud. It exists because the HoloLens 2 can only store and render a limited level of detail for something like a 3D model of an engine locally. With Remote Rendering, some of the detail can stream in real time from the cloud, so it can display potentially infinite levels of detail, allowing you to model and interact with the smallest parts of a holographic machine.
Finally, there’s Azure Spatial Anchors. It lets you pin holograms to real places in the world. At a basic level, it’s not all that different from what Apple and Google are already doing in augmented reality: letting multiple devices see and interact with the same virtual object. Microsoft’s ambitions are much grander, though: it wants to create the infrastructure for a “world scale” set of holograms, and it’s building tools that let developers use that infrastructure across platforms, including iOS and Android.
Solving that requires more than just GPS location and object recognition. Kipman talks a lot about distinguishing between identically boring conference rooms that are in the same spot on different floors. Tracking objects in space using optics is famously difficult. Walk in a circle around a building, and your position will drift, so the computer won’t put your ending point at the starting point. It’s a little fuzzy about how far along Microsoft has actually gotten toward solving these problems, but it’s actively working on them.
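The drift problem mentioned above is easy to demonstrate: if each turn estimate carries even a tiny systematic error, integrating positions step by step means a closed walk no longer closes. This sketch is a generic dead-reckoning illustration of that effect, not anything from Microsoft's tracking stack; the per-step `heading_bias` is an assumed error term.

```python
import math

def dead_reckon(steps, heading_bias=0.01):
    """Integrate a sequence of (turn_radians, distance) motions,
    adding a small constant heading error on every turn, the way an
    optical tracking system accumulates estimation error.

    Returns the final (x, y) position in meters.
    """
    x = y = heading = 0.0
    for turn, dist in steps:
        heading += turn + heading_bias  # each turn picks up a little error
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

# Walk a closed square: four 10 m legs, each preceded by a 90-degree left turn.
square = [(math.pi / 2, 10.0)] * 4
```

With `heading_bias=0.0` the walk returns to the origin; with the small per-step bias, the endpoint lands measurably away from where it started, which is exactly why world-scale anchors need more than naive position integration.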
Alex Kipman believes we are on the precipice of the “third era of computing.” First came PCs with their open architectures, second came phones with walled garden app stores, and now he hopes mixed reality headsets will swing the pendulum back to openness because Microsoft intends to keep the HoloLens open. The HoloLens works with Microsoft’s cloud services, but it would work with other ecosystems, too. Kipman says the HoloLens and Azure are “loosely coupled, but tightly aligned.”
I could do more than quibble with his summary of the history of computing and point out that there’s also quite a history of underdogs calling for openness, but the larger point stands: Microsoft thinks that mixed reality is going to be a Big Deal.
Understanding Microsoft’s plans lately has required wading through a lot more jargon than it used to. With the HoloLens 2 specifically, expect a lot of discussion about “time-to-value” (how quickly a user can do something useful after getting a device from an employer) and “intelligent edge” (devices with their own computing power nevertheless connected to the cloud).
There’s a cognitive dissonance for regular consumers with all of that talk. Kipman’s protestations to the contrary, there is plenty of hype around the HoloLens 2. It’s just directed at corporations now. Some of it is well-deserved. I think that the HoloLens 2 is a technical marvel. Just because it isn’t being sold as a consumer device doesn’t mean that it’s not also an important piece of technology, something that could change our conception of what a computer should look like.
But we’re used to consumer electronics companies doing their best to put such technical marvels on store shelves, translating that hype into gadgets in our pockets and on our heads.
For the HoloLens 2, the hype isn’t about personal technology. It’s just business.
According to reports, TSMC’s Fab 14 in the Nanke science park has experienced a production shutdown after substandard chemicals used in the manufacturing process ruined tens of thousands of wafers. Defects of this kind in the silicon wafers cannot be detected until after the production run. Affected customers include industry heavyweights such as NVIDIA, MediaTek, and Huawei HiSilicon, along with some ARM server processor designs. The 16/12nm process is one of TSMCʼs main sources of revenue. TSMC does not yet know the financial impact of the loss, but it is expected to be extremely high because the process is used for advanced chips such as NVIDIA GPUs.
More detailed reports say the wafer contamination occurred at Fab 14 in the Nanke science park, the same fab that was hit by last year’s virus incident. Wafer manufacturing is an extremely demanding process that relies on a wide variety of high-purity chemical materials. This accident happened because a batch of imported chemicals did not meet purity requirements, leaving flaws in the wafers produced.