Tuesday, 31 October 2023
How much is enough to support those who maintain open source?
from ComputerWeekly.com https://ift.tt/3TFsRky
We mustn’t let the return to offices kill the growth of women in tech
from ComputerWeekly.com https://ift.tt/UgoWG5H
The Morning After: Apple reveals new MacBook Pros, M3 chips and a new iMac
During its Scary Fast product event last night, Apple officially debuted its new M3, M3 Pro and M3 Max chips. The company is positioning the M3 chips as major upgrades over its M1 hardware — if you bought an M2 system, you’re probably not itching for a replacement just yet.
The M3’s GPU is the biggest leap forward, delivering new features like hardware-accelerated ray tracing and mesh shading, which enable more realistic lighting and better geometry handling. If you’re into chip architecture and other fun endeavors, the M3 chips are also notable for being the first PC chips built on a three-nanometer process — both the M1 and M2 families are based on a 5nm process. This means more transistors packed into the same space, which improves power efficiency as well as overall performance. The M3 series will feature in the revamped MacBook Pro 14-inch and 16-inch (more on those below), as well as the 24-inch iMac.
That new chip will make the new iMac up to twice as fast as its predecessor, but there aren’t too many upgrades elsewhere. Apple is sticking with a 4.5K Retina display, for instance. There are some handy changes on the connectivity front, with support for Wi-Fi 6E and Bluetooth 5.3. The new iMac starts at $1,299 and ships on November 7.
— Mat Smith
You can get these reports delivered daily direct to your inbox. Subscribe right here!
The biggest stories you might have missed
Sweeping White House executive order takes aim at AI’s toughest challenges
Sony’s WH-1000XM5 ANC headphones drop to $330
The best cheap phones for 2023
Avatar: Frontiers of Pandora’s adventurous spirit might just win you over
Apple kills off the 13-inch MacBook Pro
But it has new 14- and 16-inch models, don’t worry.
Apple’s updated line of 14- and 16-inch MacBook Pros features a range of new M3 chips and a new Space Black chassis. Was that the spooky part of Apple’s event?
The 14-inch MBP with a base M3 processor will cost $1,599 — the lowest starting price yet for the 14-inch laptop. The M3 Pro iteration will still cost you $1,999, and prices go up from there for M3 Max options. Meanwhile, a base 16-inch MacBook Pro with an M3 Pro chip will have the same $2,499 starting price as its M2 Pro predecessor. Alas, the 13-inch version is no more. Farewell, Touch Bar.
Lenovo Smart Paper review
A solid e-ink tablet spoiled by the cost.
In the last few years, we’ve seen Amazon get into e-ink scribes, while startups like ReMarkable have carved out their own niche with capable hardware for a reasonable price. Lenovo, having dabbled with e-ink on devices like the Yoga Book, has joined the fray with a dedicated device, the Smart Paper. While the product hasn’t yet launched in the US, the Smart Paper has launched elsewhere, including the UK. At around $400 (or £500 in the UK), it’s expensive. The hardware is impressive (and useful), but it’s all tainted by a subscription service that demands even more money.
X won’t pay creators for tweets that get fact checked with community notes
The ‘slight change’ is the latest attempt to address misinformation.
X will no longer pay creators for tweets promoting misinformation. Elon Musk said the company is making a “slight change” to its monetization program, and tweets fact-checked via community notes will no longer be eligible for payouts.
The latest change comes as researchers, fact-checkers and journalists have raised the alarm about the amount of viral misinformation spreading on X amid the ongoing conflict in Israel and Gaza. Recent analysis from NewsGuard, a nonprofit that tracks the spread of misinformation, found 74 percent of “the most viral posts on X advancing misinformation about the Israel–Hamas war are being pushed by ‘verified’ X accounts.”
This article originally appeared on Engadget at https://ift.tt/8N4UupV
Monday, 30 October 2023
The Morning After: Samsung pays tribute to its flip phone past with limited-edition foldable
Samsung has unveiled the Galaxy Z Flip 5 Retro, a limited-edition foldable that pays homage to the SGH-E700 (AKA the SGH-E715 in the US), which came out 20 years ago in 2003. It has the same indigo blue and silver color combo as the original and a few special widgets, but it’s otherwise the same foldable flip phone from earlier this year. This special edition will go on sale in Korea and several countries in Europe, but not the US.
The SGH-E700 was Samsung’s first mobile phone with an integrated antenna and became a certified hit, selling more than 10 million units. Weirdly, this isn’t even the first time Samsung has tugged at nostalgia strings with this phone: in 2007, Samsung effectively reissued the same phone with new radios as a nostalgia play, even though it was only four years old at the time.
— Mat Smith
The biggest stories you might have missed
How to customize the double tap gesture on Apple Watch
The best gadgets for your pets
Is streaming video even still worth it?
What the evolution of our own brains can tell us about the future of AI
What we got right (and wrong) about Elon Musk’s takeover of Twitter
One year later, it’s X.
Exactly one year has passed since Elon Musk, fresh off a months-long legal battle that forced him to buy the company, strolled into Twitter headquarters carrying a sink. We weren’t entirely sure what to expect. But there was no shortage of predictions about just how messy and chaotic Twitter might become under Musk’s leadership. The biggest twist, however, might be Meta making its Twitter rival, Threads, into a viable (if flawed) alternative. Karissa Bell walks through what did (and didn’t) happen when Musk took charge.
Threads is working on an API for developers
Threads aims to be the place for public conversations online.
When it launched, Threads was missing a lot of features users would expect from a service similar to Twitter (now X). It has added new features over the past few months, but it still doesn’t have an API, so third-party developers can’t build features with hooks into their services. For example, local transport agencies can’t automatically post service alerts when a train is delayed.
According to Instagram chief Adam Mosseri, though, Threads is working on an API for developers — he just has some reservations. He’s concerned the API’s launch could mean “a lot more publisher content and not much more creator content.” Mosseri may be hinting at the early days of Threads, where people’s feeds were dominated by brands and accounts with (presumably) social media staffers posting up a storm.
Google’s default search engine status cost it $26 billion in 2021
The figure was revealed in the DOJ’s antitrust trial against the search giant.
Google VP Prabhakar Raghavan testified the company paid $26.3 billion in 2021 for maintaining default search engine status and acquiring traffic. Most of that likely went to Apple, in order to remain the default search option on iPhone, iPad and Mac.
Raghavan, who was testifying as part of the DOJ’s ongoing antitrust suit against the company, said Google’s search advertising made $146.4 billion in revenue in 2021, which puts the $26 billion it paid for default status in perspective. The executive added that default status made up the lion’s share of what it pays to acquire traffic.
How to watch Apple’s Scary Fast event
The night time is the right time for new iMacs and laptops.
Apple’s holding another streaming event today, Monday October 30, at 8PM ET. Yes, that’s in the dead of night, and you can watch the stream on YouTube, on Apple’s website and on Apple TV devices. Here’s what you can expect to see.
This article originally appeared on Engadget at https://ift.tt/ISralpH
FDM Group partners with ISACA to boost cyber training programme
from ComputerWeekly.com https://ift.tt/nlPEIJd
More office days leads to fewer women, says Nash Squared
from ComputerWeekly.com https://ift.tt/PXksgue
Sunday, 29 October 2023
New report reveals details on the three M3 chips Apple may launch Monday night
Apple is planning to debut three M3 chips at its “Scary Fast” Mac event Monday night, according to Bloomberg’s Mark Gurman — the M3, M3 Pro and M3 Max. The event is set to kick off at 8 PM ET and is expected to bring multiple hardware announcements. Gurman previously reported that the company is prepping a new 24-inch iMac which could make an appearance tomorrow, along with upgraded MacBook Pros running the new M3 series.
In the Power On newsletter, Gurman writes that the standard M3 chip is likely to sport an eight-core CPU and 10-core GPU like the M2, but with improvements to performance speed and memory. He also notes the company is testing multiple configurations for both the M3 Pro and M3 Max chips. We may see an M3 Pro with 12-core CPU/18-core GPU and the option for a pricier 14-core CPU with a 20-core GPU. Meanwhile, the M3 Max could come with 16 CPU cores and either 32 or 40 GPU cores.
We won’t know anything for sure until Apple's unusually timed October event starts tomorrow night. Thankfully, that’s not a long time to wait. Join us here to watch as it all unfolds.
This article originally appeared on Engadget at https://ift.tt/x8FmVf0
What the evolution of our own brains can tell us about the future of AI
The explosive growth in artificial intelligence in recent years — crowned with the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right.
In this week's Hitting the Books excerpt, from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett explores the quizzical gap in computer competency by examining the development of the organic machine AIs are modeled after: the human brain.
Focusing on the five evolutionary "breakthroughs," amidst myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds, Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech.
Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright © 2023 by Max Bennett. All rights reserved.
Words Without Inner Worlds
GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:
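GPT-3's gargantuan neural network is far more sophisticated than simple counting, but the underlying idea of next-word prediction can be sketched with a toy bigram model. This is purely illustrative (the tiny corpus and the `predict` helper are invented for this sketch, not how GPT-3 actually works):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for GPT-3's training text (illustrative only)
corpus = "one plus one equals two . roses are red violets are blue .".split()

# Count, for each word, which words follow it and how often
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict("equals"))   # two
print(predict("violets"))  # are
```

A real language model replaces the count table with learned weights, which is what lets it generalize to sequences it has never seen.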
- One plus one equals _____
- Roses are red, violets are _____
You’ve seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.
Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next.
GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:
- If 3x + 1 = 3, then x equals _____
- I am in my windowless basement, and I look toward the sky, and I see _____
- He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____
- I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____
Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.
We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.
I gave the same four questions to GPT-3; here are its responses (GPT-3’s completions are the text after each prompt):
- If 3x + 1 = 3, then x equals 1
- I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.
- He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!
- I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.
All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.
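The algebra in the first question is exactly the kind of thing that can be checked mechanically rather than by pattern-matching; a quick sketch with Python's exact-arithmetic `fractions` module:

```python
from fractions import Fraction

# Solve a*x + b = c exactly: x = (c - b) / a
a, b, c = 3, 1, 3
x = Fraction(c - b, a)
print(x)  # 2/3, not the 1 that GPT-3 answered
```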
What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.
Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:
Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?
1) Librarian
2) Construction worker
If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
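The base-rate argument above can be made concrete with the passage's own hypothetical numbers (100 times more construction workers than librarians, 95 percent of librarians meek versus 5 percent of construction workers):

```python
# Hypothetical counts from the passage: construction workers outnumber
# librarians ~100 to 1; 95% of librarians and 5% of workers are meek.
librarians = 1_000
construction_workers = 100_000

meek_librarians = 0.95 * librarians            # 950
meek_workers = 0.05 * construction_workers     # 5,000

# P(librarian | meek): share of all meek people who are librarians
p_librarian_given_meek = meek_librarians / (meek_librarians + meek_workers)
print(round(p_librarian_given_meek, 2))  # 0.16
```

Even with the stereotype strongly favoring librarians, a meek person is still about five times more likely to be a construction worker.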
The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.
It is with questions that require simulation where language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.
Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.
A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.
You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.
To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 =___ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.
The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both these systems are experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):
Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:
Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
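Both cognitive reflection answers fall out of a line or two of arithmetic once you set the problems up, which is precisely the deliberate step the reflexive answer skips (working in cents to avoid float noise):

```python
# Bat and ball: bat + ball = 110 cents and bat - ball = 100 cents,
# so ball = (110 - 100) / 2 = 5 cents, not the reflexive 10 cents.
total, difference = 110, 100
ball = (total - difference) // 2     # 5 cents
bat = ball + difference              # 105 cents
print(ball, bat)  # 5 105

# Widgets: 5 machines make 5 widgets in 5 minutes, i.e. each machine
# makes one widget per 5 minutes. 100 machines work in parallel.
minutes = (100 // 100) * 5
print(minutes)  # 5
```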
And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: it answered ten cents to the first question and one hundred minutes to the second.
The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.
This article originally appeared on Engadget at https://ift.tt/5I7TgiK
Saturday, 28 October 2023
NASA is launching a rocket on Sunday to study a 20,000-year-old supernova
A sounding rocket toting a special imaging and spectroscopy instrument will take a brief trip to space Sunday night to try and capture as much data as it can on a long-admired supernova remnant in the Cygnus constellation. Its target, a massive cloud of dust and gas known as the Cygnus Loop or the Veil Nebula, was created after the explosive death of a star an estimated 20,000 years ago — and it’s still expanding.
NASA plans to launch the mission at 11:35 PM ET on Sunday October 29 from the White Sands Missile Range in New Mexico. The Integral Field Ultraviolet Spectroscopic Experiment, or INFUSE, will observe the Cygnus Loop for only a few minutes, capturing light in the far-ultraviolet wavelengths to illuminate gasses as hot as 90,000-540,000 degrees Fahrenheit. It’s expected to fly to an altitude of about 150 miles before parachuting back to Earth.
The Cygnus Loop sits about 2,600 light-years away, and was formed by the collapse of a star thought to be 20 times the size of our sun. Since the aftermath of the event is still playing out, with the cloud currently expanding at a rate of 930,000 miles per hour, it’s a good candidate for studying how supernovae affect the formation of new star systems. “Supernovae like the one that created the Cygnus Loop have a huge impact on how galaxies form,” said Brian Fleming, principal investigator for the INFUSE mission.
“INFUSE will observe how the supernova dumps energy into the Milky Way by catching light given off just as the blast wave crashes into pockets of cold gas floating around the galaxy,” Fleming said. Once INFUSE is back on the ground and its data has been collected, the team plans to fix it up and eventually launch it again.
This article originally appeared on Engadget at https://ift.tt/joZ5dBe
Instagram head says Threads is working on an API for developers
Threads was missing a lot of features users would expect from a service similar to Twitter (now X) when it launched. Over the past few months, however, it has been rolling out more and more new features to give users a more robust experience, including polls, an easy way to post GIFs and the ability to quote posts on the web. Still, since it doesn't have an API, third-party developers can't build features specific to their services that would make the social network a more integral part of people's everyday lives. An example of that is local transportation agencies being able to automatically post service alerts when a train is delayed. According to Instagram chief Adam Mosseri, though, Threads is working on an API for developers — he just has concerns about how it's going to be used.
As first reported by TechCrunch, Mosseri responded to a conversation on the platform about having a TweetDeck-like experience for Threads. In a response to a user saying that Threads has no API yet, the executive said: "We're working on it." He added that he's concerned that the API's launch could mean "a lot more publisher content and not much more creator content," but he's aware that it "seems like something [the company needs] to get done."
Mosseri previously said that Threads won't amplify news, which may have been disappointing to hear for publishers and readers looking to leave X. Instead, he said, Threads wants to "empower creators in general." More recently, in an AMA he posted on the platform, Mosseri said that his team's long-term aspiration is for Threads to become "the de facto platform for public conversations online," which means being both culturally relevant and big in terms of user size. He said he believes Threads has a chance of surpassing X, but he knows that his service has a long way to go. For now, he keeps his team focused on making people's experience better week by week.
Mark Zuckerberg recently announced that Threads has "just under" 100 million monthly active users. Like Mosseri, he is optimistic about its future and said that there's a "good chance" it could reach 1 billion users over the next couple of years.
This article originally appeared on Engadget at https://ift.tt/d7e5jpZ
Friday, 27 October 2023
UK regulators confident they are ready for AI safety governance
from ComputerWeekly.com https://ift.tt/LuXhmFA
The Morning After: Leica’s new camera was built to fight disinformation
In this dizzying world of digital tricks and image manipulation, where you can erase objects and alter images with a smartphone swipe, Leica wants photos taken on its camera to leave a digital footprint, known as a Content Credential. The M11-P also has a 60-megapixel sensor, plus the typical understated layout and Leica styling.
Content Credentials capture metadata about the photograph – like the camera used, location, time and more – and lock those details in a manifest that is wrapped up with the image using a cryptographic key. Those credentials can be verified online, and whenever someone subsequently edits that photo, the changes are recorded to an updated manifest, bundled with the image and updated in the Content Credentials database.
Users can click on an icon to pull up all of this historical manifest information, which is being described as a "nutrition label" for photographs.
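The general pattern described above, a signed metadata manifest that each edit chains onto, can be sketched in a few lines. This is a loose illustration only: the key, field names and use of a shared-secret HMAC are invented for the sketch, while real Content Credentials (the C2PA standard Leica uses) rely on asymmetric signatures from keys in the camera's secure hardware:

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real camera signs with a private key
# embedded in secure hardware, not a shared secret like this.
SECRET_KEY = b"camera-private-key"

def sign_manifest(metadata, prev_signature=""):
    """Bundle metadata with a signature chained to any prior manifest."""
    payload = json.dumps(metadata, sort_keys=True) + prev_signature
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "prev": prev_signature, "signature": sig}

def verify(entry):
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(entry["metadata"], sort_keys=True) + entry["prev"]
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])

# A capture, then an edit recorded against the capture's signature
capture = sign_manifest({"camera": "M11-P", "time": "2023-10-26T09:00Z"})
edit = sign_manifest({"edit": "crop"}, prev_signature=capture["signature"])
print(verify(capture), verify(edit))  # True True
```

Because each manifest includes the previous signature, tampering with any earlier entry invalidates everything after it, which is what makes the "nutrition label" history trustworthy.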
– Mat Smith
The biggest stories you might have missed
Fox Sports will use drones in World Series broadcasts for the first time
Google updates Maps with a flurry of AI features including 'Immersive View for routes'
How Recteq’s dual-chamber and griddle designs put a unique spin on pellet grills
The best mesh Wi-Fi router systems of 2023
FL Studio 21.2 can separate the bass, vocals and drums from your favorite songs
Google expands its bug bounty program to target generative AI attacks
What to expect from Apple's Scary Fast event
M3-powered MacBook Pros and new iMacs.
On All Hallows’ Eve… eve, Apple is hosting another event. This one is dubbed “Scary Fast,” which is a good indicator that Apple will have some powerful new hardware (or chips) to show off. It's been nearly 17 months since Apple's M2 system on a chip (SoC) debuted. With many chip rivals following an annual cadence for their chipsets, it may be time for the M3.
Most rumors suggest a new iMac, possibly powered by the new chip, and the 24-inch iMac is well overdue for a refresh. Or maybe the company will scare us all with even more subscription price increases.
The Xiaomi 14 Pro packs a faster Leica camera and comes in a titanium edition
For now, it’s only headed to China.
Xiaomi has only just introduced its 13T phone series outside of Asia, and the company is already revealing more flagship phones back in China. The Xiaomi 14 Pro has a 6.73-inch screen offering an industry-leading peak brightness of 3,000 nits and a variable refresh rate from 1Hz to 120Hz. Its main camera has a variable aperture ranging from f/1.42 to f/4.0, a telephoto camera capable of 3.2x zoom, and a 50-megapixel f/2.2 camera for ultra-wide shots. The Xiaomi 14 Pro starts from 4,999 yuan (around $680), but if you want the titanium edition, it'll cost you 6,499 yuan (around $890).
Spotify looks set to overhaul its royalty model next year
It could implement minimum play thresholds.
Spotify's royalty model will get a massive revamp next year to give "working artists" a bigger cut. It’s planning three changes, starting with establishing a minimum number of annual streams a track must reach to generate royalties. While the tracks below that threshold make up a tiny percentage of music on the platform, their royalties still cost Spotify tens of millions of dollars a year. The second change is detecting illegal activity, like using AI tools to repeatedly stream tracks and artificially boost play counts. The third part is aimed at "non-music noise content," such as white noise and binaural beats. Many noise tracks on Spotify are only 31 seconds long because the platform pays for every play over half a minute. The track then naturally leads into another, and possibly another royalty check. But not for much longer.
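The two payout rules the article describes, a minimum annual stream count and the half-minute play threshold, amount to a couple of simple checks. The exact minimum is an assumption here (Spotify hadn't announced a number), so treat these constants as placeholders:

```python
# Illustrative thresholds only; the annual-stream minimum is hypothetical.
MIN_ANNUAL_STREAMS = 1000
MIN_PLAY_SECONDS = 30  # the article: the platform pays for plays over half a minute

def play_counts(listen_seconds):
    """A single play is payable only past the half-minute mark."""
    return listen_seconds > MIN_PLAY_SECONDS

def track_earns_royalties(annual_streams):
    """Under the proposed model, low-volume tracks earn nothing."""
    return annual_streams >= MIN_ANNUAL_STREAMS

print(play_counts(31), play_counts(29))                          # True False
print(track_earns_royalties(1500), track_earns_royalties(500))   # True False
```

The 30-second rule also explains the 31-second noise tracks: they are the shortest possible payable play.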
This article originally appeared on Engadget at https://ift.tt/DIfv7Zo
Hertz decides to slow down its EV expansion
In 2021, Hertz announced that it was going to order 100,000 electric vehicles from Tesla by the end of 2022. It turns out the car rental company is still far from reaching that number, and it may take a while to get to 100,000, if it ever does, because it's slowing down its plans to electrify its fleet. During the company's third-quarter earnings call (PDF), CEO Stephen Scherr said Hertz's "in-fleeting of EVs will be slower than [its] prior expectations."
Hertz reported a 13 percent margin for the quarter, which Scherr said would've been "several points higher" if not for the cost challenges associated with EVs. One of the factors that affected the company's margins was depreciation, compounded by the one-third drop in retail prices of the electric cars in its fleet. Tesla had implemented several price cuts over the past year, slashing the Model S and X prices by nearly 20 percent in September.
In addition, the CEO said that EVs are costing Hertz "about twice in terms of damage cost repair than a conventional internal combustion engine vehicle." He said the company is working directly with Tesla to look at its cars' performance and lower the risk of damage, as well as to improve parts procurement and labor. The company disclosed in its earnings report that Tesla vehicles make up 80 percent of its EV fleet, which it says works out to 35,000 Teslas out of its 50,000 EVs. As CNBC explains, EVs come with their own set of maintenance challenges, potentially brought about by their heavier weight. Aside from those factors, moving a portion of its EV fleet from ridesharing use to leisure affected its margins as well. Hertz rents Tesla EVs to Uber and Lyft drivers, and it's now planning to move the vehicles it removed from the pool back to its ridesharing business.
Scherr said Hertz remains committed to its long-term plan to electrify its fleet, but it's going to pace itself while it looks for solutions to its EV-related issues. The CEO talked about how taking on EVs from other manufacturers like GM could address some of the problems it's facing. He expects Hertz to be able to purchase them at an "appreciably lower price point" than the prices it paid for its Tesla vehicles. He also thinks that those cars "will likely speak to lower incidence of damage," as well as to "a lower cost of parts and labor." GM and other traditional automakers have broad nationwide parts supply networks established over decades, which will make it easier (and potentially cheaper, given aftermarket availability) to procure components.
This article originally appeared on Engadget at https://ift.tt/fk9ycZT
Thursday, 26 October 2023
The Morning After: Meta’s Threads reaches almost 100 million active users
Meta’s Threads continues to grow, all while the service it aped, X, continues to splutter and fall apart. Mark Zuckerberg said that Threads currently has “just under” 100 million monthly active users and that the app could reach 1 billion users in the next couple of years.
Threads picked up 100 million sign-ups in its first week, with easy ways to create an account from your existing Instagram profile. However, engagement dropped off amid complaints about limited functionality and feeds flooded with unwanted posts from brands and users with big audience numbers on Instagram. I was not interested in the piecemeal thoughts of startup execs with a podcast. Shocking, I know.
Meta has since steadily added new features, and engagement seems to have rebounded in recent weeks as Elon Musk continues to make unpopular changes to X, like stripping headlines from links and, well, all the other things.
– Mat Smith
You can get these reports delivered daily direct to your inbox. Subscribe right here!
The biggest stories you might have missed
Black Friday 2023: The best early deals
The White House will reportedly reveal a ‘sweeping’ AI executive order on October 30
iOS 17.1 is here with improvements to AirDrop and new flair for Apple Music
Apple will reportedly bring ANC to its 'regular' AirPods next year
X is rolling out an audio and video calling feature nobody asked for
What did we just say?
X (formerly known as Twitter) has begun rolling out yet another feature nobody asked for. Now, users will have the option to call each other via audio and video calls on the platform. This doesn't come as a total surprise, as CEO Linda Yaccarino previously confirmed that video chat would be coming to the social media site back in August. The best explanation for the addition is Elon Musk’s aim to make X the “everything” app – a one-stop shop for multiple features and services.
DJI's Osmo Pocket 3 camera features a 1-inch sensor and a rotating display
It also offers 4K 120p video and ActiveTrack 6.0 stabilization.
DJI's Osmo Pocket 3 gimbal camera has arrived with major updates over the previous model, adding a much larger 1-inch sensor that should greatly improve image quality. It also packs a new 2-inch display with 4.7 times the area of the last model's. That said, it's also significantly more expensive than the Pocket 2 was at launch: it costs $520 in the US, $170 more than the Pocket 2.
Apple TV+ prices have doubled in just over a year
Apple One, Arcade and News+ plans are now more expensive too.
The price of Apple TV+ is going up by $3 per month to $10. The annual TV+ plan has risen from $69 to $99. Apple Arcade is now $7 per month instead of $5. As for Apple News+, that'll now run you $13 per month for a standalone subscription, up from $10. The cost of an Apple TV+ subscription previously went up from $5 per month to $7 in October 2022, meaning that the price of the service has doubled in just over 12 months.
TikTok's first live 'global music event' will feature Cardi B and Charlie Puth
In The Mix will take place in Arizona on December 10.
TikTok In The Mix will take place in Mesa, Arizona on December 10 – the first global live music event from the video platform. The headliners are Cardi B, Niall Horan, Anitta and Charlie Puth, with surprise guests and performances by emerging artists. Followers of the four headliners will get presale codes to buy In The Mix tickets starting on October 27. The general sale will start on November 2 and TikTok will stream the event live on its app too.
This article originally appeared on Engadget at https://ift.tt/rVIEFnk
Wednesday, 25 October 2023
Learning from Google: A Computer Weekly Downtime Upload podcast
from ComputerWeekly.com https://ift.tt/ClamNOy
The Honda Prelude returns as a concept EV
Honda has brought its iconic Prelude back in the form of a new concept EV, a two-door coupe that looks surprisingly ready for production, the company announced. No details about the powertrain were revealed, but Honda said it represents a preview of the company's future EV lineup and demonstrates its commitment to driver-focused performance.
The Prelude concept was revealed at the end of Honda's Tokyo Mobility Show presentation without many details, other than the appearance. It resembles the latest Honda Civic, particularly in the front end. It's less angular though, retaining the smoother lines that later versions of the original Prelude were known for. Other notable visual cues include bulging fenders, regular side mirrors (not cameras), a small spoiler and blacked-out windows. The latter probably means that the concept doesn't have much in the way of an interior yet.
The original Prelude put Honda on the map for front-wheel-drive performance, famously coming in second to the Porsche 944 in a 1984 Car and Driver shootout (while beating a Ferrari 308, Lotus Esprit, two other Porsches and a Toyota Supra in the process). It was discontinued in 2001, with the final US model offering 200 horsepower.
Honda was very slow, reluctant even, to embrace electric cars — bringing the breakthrough Honda E to market was an uphill battle. And that vehicle likely won't get a follow-up, as Honda said earlier this year that it would focus on SUVs instead. However, CEO Toshihiro Mibe made clear that the Prelude concept represents the company's way forward in terms of sporty EVs.
"The word 'prelude' means an 'introductory or preceding performance,'" he said. "This model will become the prelude for our future models which will inherit the 'joy of driving' into the full-fledged electrified future and embody Honda's unalterable sports mindset. The Prelude Concept is a specialty sports model that will offer [an] exhilarating experience."
Those comments suggest that the company will eventually build the Prelude, or something like it. That would be a way for Honda to move into EVs while still embracing its enthusiast performance heritage.
This article originally appeared on Engadget at https://ift.tt/hOyIuXL
Qualcomm's new audio chip uses Wi-Fi to massively extend headphone range
In addition to the Snapdragon 8 Gen 3 and the Snapdragon X Elite, Qualcomm has also introduced the S7 and S7 Pro Gen 1 at the Snapdragon Summit in Hawaii. The company said its new chips deliver six times the compute power of their predecessors, along with on-device AI capabilities. More intriguing, perhaps, is the S7 Pro's micro-power Wi-Fi connectivity, which will apparently allow users to "walk around a home, building or campus while listening to music or making calls."
As The Verge notes, the chip uses Qualcomm's Expanded Personal Area Network (XPAN) technology that can automatically switch a device's connection. When a user strays too far from their phone while their earbuds are connected to it via Bluetooth, for instance, XPAN switches the connection to a Wi-Fi access point. It can deliver 96kHz lossless audio via earbuds, Qualcomm's Dino Bekis told the publication, and it works with 2.4, 5 and 6GHz bands. Bekis also said that users only have to click on a prompt once to connect their earbuds powered by the chip to their Wi-Fi.
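The handoff behavior being described, Bluetooth when near the phone and Wi-Fi once out of range, can be sketched as a simple transport-selection rule. Everything here is an illustrative assumption about the behavior, not Qualcomm's actual XPAN API or thresholds:

```python
# Illustrative sketch of the Bluetooth-to-Wi-Fi handoff XPAN is described
# as performing. Names and the range threshold are assumptions.

BLUETOOTH_RANGE_M = 10  # assumed usable Bluetooth range in meters

def pick_transport(distance_to_phone_m, wifi_available):
    """Prefer Bluetooth close to the phone; fall back to Wi-Fi beyond range."""
    if distance_to_phone_m <= BLUETOOTH_RANGE_M:
        return "bluetooth"
    if wifi_available:
        return "wifi"  # XPAN-style switch to a nearby Wi-Fi access point
    return "disconnected"

print(pick_transport(3, wifi_available=True))   # -> bluetooth
print(pick_transport(40, wifi_available=True))  # -> wifi
```

The interesting engineering is in making that switch seamless mid-stream; per Qualcomm, the user only has to approve the Wi-Fi connection once.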
Outside of the S7 Pro's Wi-Fi connectivity, the platforms' on-board AI enables better responsiveness to the listener's environment if they want to hear ambient sounds. But if they want to block out their environment completely, the chips are supposed to be capable of Qualcomm's "strongest ever ANC performance" regardless of earbud fit.
These features will only be enabled when headsets, earbuds and speakers powered by the S7 and S7 Pro are paired with devices equipped with the new Snapdragon 8 Gen 3 mobile platform and Snapdragon X Elite, though. That means we won't be seeing products with the new sound chips on the market anytime soon. When they do come out, they'll most likely be meant for Android devices, seeing as Apple has its own ecosystem.
This article originally appeared on Engadget at https://ift.tt/p815hUd
Tuesday, 24 October 2023
New tool lets artists fight AI image bots by hiding corrupt data in plain sight
From Hollywood strikes to digital portraits, AI's potential to steal creatives' work and how to stop it has dominated the tech conversation in 2023. The latest effort to protect artists and their creations is Nightshade, a tool allowing artists to add undetectable pixels into their work that could corrupt an AI's training data, the MIT Technology Review reports. Nightshade's creation comes as major companies like OpenAI and Meta face lawsuits for copyright infringement and stealing personal works without compensation.
University of Chicago professor Ben Zhao and his team created Nightshade, which is currently being peer reviewed, in an effort to put some of the power back in artists' hands. They tested it on recent Stable Diffusion models and an AI they personally built from scratch.
Nightshade essentially works as a poison, altering how a machine-learning model produces content and what that finished product looks like. For example, it could make an AI system interpret a prompt for a handbag as a toaster or show an image of a cat instead of the requested dog (the same goes for similar prompts like puppy or wolf).
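To illustrate the general idea of imperceptible pixel perturbation (and only the general idea: Nightshade's actual optimization is far more sophisticated and targets specific prompts), a poisoned image differs from the original by tiny per-pixel offsets a viewer would never notice:

```python
import numpy as np

# Conceptual illustration only, not Nightshade's algorithm: a tiny
# per-pixel perturbation is visually negligible yet still changes the
# data an image model would train on.

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.int16)

perturbation = rng.integers(-2, 3, size=image.shape)  # offsets in [-2, 2]
poisoned = np.clip(image + perturbation, 0, 255)

max_change = int(np.abs(poisoned - image).max())
print(max_change)  # at most 2 out of 255, imperceptible to a viewer
```

Nightshade's real contribution is choosing those offsets so the poisoned image systematically shifts a model's concept associations, rather than adding random noise as above.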
Nightshade follows Zhao and his team's August release of a tool called Glaze, which also subtly alters a work of art's pixels, but Glaze makes AI systems perceive the image as something entirely different from what it actually is. An artist who wants to protect their work can upload it to Glaze and opt in to using Nightshade.
Damaging technology like Nightshade could go a long way toward encouraging AI's major players to request artists' work and compensate them properly (it seems like a better alternative to having your system rewired). Companies looking to remove the poison would likely need to locate every piece of corrupted data, a challenging task. Zhao cautions that some individuals might attempt to use the tool for malicious purposes, but that any real damage would require thousands of corrupted works.
This article originally appeared on Engadget at https://ift.tt/yCwlPp8
Chevy offers $1,400 to Bolt EV owners who endured lower charging levels
GM has announced that it will pay $1,400 to owners of 2020-2022 Bolt EVs and EUVs who endured a recall that limited range to 80 percent for a significant period of time, Electrek has reported. It's effectively an upfront payment to customers as part of an expected class action settlement.
"GM is announcing a compensation program for 2020-22 Bolt EV/EUV owners upon installation of the final advanced diagnostic software as part of the original battery recall," a spokesperson wrote in a statement. "Owners are eligible to receive a $1,400 Visa eReward card upon installation. This applies to Bolt EV/EUV owners in the US only. We’re grateful to our customers for their patience and understanding."
Owners must install a "software final remedy" by December 31, 2023 and sign a legal release — those who decline will have to wait for the class action lawsuit to play out. If the settlement ends up being more than $1,400, those who accept the payment will still receive the difference.
It seems like Chevy's Bolt EVs (and larger EUVs that came along in 2021) have never not had problems with their batteries. The 2017-2019 models had serious defects that could cause fires, forcing GM to recall them and install special software, reducing maximum charge levels to 90 percent.
The 2020-2022 models affected by the lawsuit used new battery chemistry with a different issue that could also cause a fire when the car was fully, or nearly fully charged. GM issued a recall for those models as well, installing diagnostic software that would reduce maximum charging levels to 80 percent (cutting range from about 259 miles to 207 miles). The software will eventually either warn customers that their battery pack needs to be replaced, or automatically return the maximum charge to 100 percent.
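The range figures in the recall follow directly from the charge cap, since an 80 percent limit scales the Bolt's rated range by the same factor:

```python
# Quick check of the range math above: an 80 percent charge cap
# scales the Bolt's ~259-mile rated range proportionally.

rated_range_miles = 259
charge_cap = 0.80

limited_range = rated_range_miles * charge_cap
print(round(limited_range))  # -> 207
```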
The problem is, the vehicles needed to reach 6,200 miles of use before the final assessment. That could be years for some buyers, and GM mandated that owners complete the diagnostic by March 2025 in order to qualify for an extended warranty or replacement battery, if needed.
GM announced earlier this year that it was discontinuing the Bolt EV amid the company's shift to the Ultium platform, possibly because it felt the name had been sullied by the battery issues. Following an outcry, though, it backtracked and said that a next-gen Bolt was in the works — showing that people still liked what the Bolt stood for (a practical, cheap EV with decent range) despite the recalls.
Presumably, any potential settlement would cover owners who effectively lost the full and expected use of their vehicle during the period. If you're part of the recall, you should receive a letter soon with a unique PIN to access the settlement site; more details are available here.
This article originally appeared on Engadget at https://ift.tt/fRY0rO8
Monday, 23 October 2023
The Morning After: North Korean workers got remote IT jobs to help finance weapons programs
The United States Justice Department says North Korean nationals have been working remotely for US companies, using fake IDs. The money they make is apparently being funneled to fund weapons of mass destruction programs. At a news conference in St. Louis, Missouri, the FBI alleged that thousands of individuals have moved to countries such as Russia and China and posed as freelance IT workers living in the US.
They used false information for emails, payment platforms and websites — sometimes even paying Americans to use their Wi-Fi and setting up proxy computers from those connections. The money being made here was substantial, too. The FBI has apparently collected around $1.5 million in money earned by these workers during previously sealed seizures in October 2022 and January 2023.
– Mat Smith
The biggest stories you might have missed
Instagram's latest test feature turns users' photos into stickers for Reels and Stories
Twitch will allow simulcasting to competitor streaming platforms
Universal Audio's SC-1 condenser microphone comes with new modeling software
NVIDIA's latest AI model helps robots perform pen spinning tricks as well as humans
You can get these reports delivered daily direct to your inbox. Subscribe right here!
Engadget Podcast: Breaking down Andreessen’s “Techno-Optimist Manifesto”
Also, we discuss why Spider-Man 2 on the PS5 is a worthy sequel.
Venture capitalist Marc Andreessen has wrapped up his pro-tech worldview in a massive tome, the Techno-Optimist Manifesto. Andreessen claims, “technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential,” and he goes on to vilify anyone who dares to step in the way of “progress.” To break down this document, we’re joined by tech critic Paris Marx. We also dive into Spider-Man 2 on PS5.
Super Mario Bros. Wonder review
The joy of pure imagination.
So Mario has always consumed mushrooms, but in the latest Mario game on Nintendo Switch, it seriously feels like the plumber (and his friends) are dabbling in hallucinogens. This has opened the creative floodgates for level design and gameplay dynamics, twisting the usual 2D platform game in weird and wonderful ways. The game also marks the first Mario title to feature a new voice actor for the protagonist.
Jon Stewart's Apple TV+ show reportedly ends following clash over AI and China
The show was abruptly canceled.
The Problem With Jon Stewart isn't returning for a third season at Apple TV+. It was supposed to begin filming another eight episodes within the next couple of weeks, but Apple and Stewart reportedly decided to part ways before it could start. According to The New York Times, the parties didn't see eye to eye, with Stewart apparently telling production staff that Apple executives had raised concerns about certain subjects the show planned to cover, particularly China and artificial intelligence. Neither party has issued a statement.
Blizzard plans to raffle off a human-blood-infused PC
Diablo IV players have to donate to make it happen.
To celebrate the release of Diablo IV’s new season, Season of Blood, Blizzard has launched a month-long blood drive in the US that’ll unlock in-game rewards. Once donations reach 666 quarts altogether, players will be able to enter sweepstakes for “a custom liquid-cooled PC infused with real human blood.” A typical blood donation is 1 pint, so it’ll take a little over 1,300 donations to hit the final goal. Get giving, you creeps.
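The donation math above checks out, given that there are two pints to a quart:

```python
# Checking the blood-drive arithmetic: the goal is 666 quarts and a
# typical donation is 1 pint (2 pints per quart).

goal_quarts = 666
pints_per_quart = 2
pints_per_donation = 1

donations_needed = goal_quarts * pints_per_quart // pints_per_donation
print(donations_needed)  # -> 1332, "a little over 1,300"
```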
This article originally appeared on Engadget at https://ift.tt/Dqy9BTb
Taxpayers to fund a further £150m for Post Office IT scandal
from ComputerWeekly.com https://ift.tt/RlV0dw2
Tinder will let your family nag you and play virtual matchmaker
Tinder has rolled out a new feature dubbed “Tinder Matchmaker” that will allow users’ family and friends to access the dating app and make recommendations for potential matches. The matchmakers do not need to have a Tinder profile to view or suggest possible pairings. Hypothetically, that means anyone from your grandmother to your ex-boyfriend could help you select a new profile to match with.
A Tinder user will need to launch a “Tinder Matchmaker session” either directly from a profile card or within the app’s settings. If you see a potential match, you can share a unique link with up to 15 individuals in a 24-hour period. Once a matchmaker gets a link, they can log into Tinder or continue as a guest.
A matchmaker will gain access to profiles they can “like” and if they do, it will appear as a recommendation for the original Tinder user to see. The matchmaker’s abilities are limited though. They can't send messages or actually swipe right on the profiles in question – ultimately, the Tinder user will decide whether or not to match with another.
“For years, singles have asked their friends to help find their next match on Tinder, and now we're making that so easy with Tinder Matchmaker," said Melissa Hobley, Tinder's Chief Marketing Officer, of the new feature.
Bumble has a similar offering, where a user can recommend a profile to a friend through a private link that only they can open within the dating app. However, it's geared more toward one-on-one sharing compared to Tinder Matchmaker. Hinge, another key competitor, tried launching a separate Hinge Matchmaker app in 2017. Matchmakers on the Hinge spinoff were supposed to suggest potential pairings based on whom the individuals knew personally on Facebook. That secondary app didn't last for Hinge – it is no longer available.
Tinder’s matchmaker feature is just the latest offering from the company designed to entice more users to engage with the app in new ways. Verification on Tinder got a boost with video selfies, incognito mode was finally introduced earlier this year and the company just started letting Tinder users specify gender pronouns and non-monogamous relationship types.
This article originally appeared on Engadget at https://ift.tt/cxujg5t
Jaguar TCS Formula E racing team making breakthroughs that will fine-tune electric road vehicles
from ComputerWeekly.com https://ift.tt/gQrMlqG
Sunday, 22 October 2023
Instagram's latest test feature turns users' photos into stickers for Reels and Stories
Instagram is testing a sticker creation feature that will let users make custom stickers from their own photos — and other users’, in some cases — and pop them into Reels or Stories. While Meta has been going all in on prompt-based, AI-generated stickers lately, this tool is something much simpler. It’ll just select the subject of a photo and remove the background, creating a free-floating sticker that can be placed over other content.
Adam Mosseri gave a brief demonstration of how it’ll work in a video shared to his broadcast channel. He also said that, in addition to creating stickers from photos saved on your phone, users will be able to make them from “eligible images you see on Instagram.” Mosseri didn’t share any further details on that, but it suggests users will be able to opt in to making their pictures stickerable.
It’s still just a test and hasn’t rolled out to all users, so we’ll see what that actually looks like in time. The platform last week started testing a new polling feature, too, which will show up in the comments section under feed posts.
This article originally appeared on Engadget at https://ift.tt/Z3zfaoA
Apple's rumored October Mac launch may happen after all
This month has been filled with conflicting rumors of an Apple product launch that either will or won’t happen, but Bloomberg’s Mark Gurman now says the October event is on, and it could bring a long overdue iMac upgrade. In the Power On newsletter, Gurman reports that sources close to Apple said a Mac launch is in the works for this month. Based on current retail supplies and shipping dates for certain models, Gurman suggests we could see a new iMac and possibly some new MacBook Pros.
“Apple retail stores are in short supply of the iMac, as well as the 13-inch MacBook Pro and high-end MacBook Pro — two other models that may be due for a refresh,” Gurman wrote, noting that current shipping estimates for these models show delays until November. That, plus the timing of the company’s earnings call — in November this year, instead of October — suggests Apple has something planned. Gurman speculates the launch event may take place on October 30 or 31.
The 24-inch M1 iMac came out in April 2021 and hasn’t been updated since, making it a good candidate for a refresh. The 13-inch M2 MacBook Pro, which was released in June 2022, is also due for an upgrade.
This article originally appeared on Engadget at https://ift.tt/mQLYyEF
NASA's John Mather keeps redefining our understanding of the cosmos
Space isn't hard only on account of the rocket science. The task of taking a NASA mission from development and funding through construction and launch — all before we even use the thing for science — can span decades. Entire careers have been spent putting a single satellite into space. Nobel-winning NASA physicist John Mather, mind you, has already helped send up two.
In their new book, Inside the Star Factory: The Creation of the James Webb Space Telescope, NASA's Largest and Most Powerful Space Observatory, author Christopher Wanjek and photographer Chris Gunn take readers on a behind-the-scenes tour of the James Webb Space Telescope's journey from inception to orbit. The book weaves examinations of the radical imaging technology that enables us to peer deeper into the early universe than ever before with profiles of the researchers, advisors, managers, engineers and technicians who made it possible through three decades of effort. In this week's Hitting the Books excerpt, a look at JWST project scientist John Mather and his own improbable journey from rural New Jersey to NASA.
Excerpted from “Inside the Star Factory: The Creation of the James Webb Space Telescope, NASA's Largest and Most Powerful Space Observatory” Copyright © 2023 by Chris Gunn and Christopher Wanjek. Used with permission of the publisher, MIT Press.
John Mather, Project Scientist
— The steady hand in control
John Mather is a patient man. His 2006 Nobel Prize in Physics was thirty years in the making. That award, for unswerving evidence of the Big Bang, was based on a bus-sized machine called COBE — yet another NASA mission that almost didn’t happen. Design drama? Been there. Navigate unforeseen delays? Done that. For NASA to choose Mather as JWST Project Scientist was pure prescience.
Like Webb, COBE — the Cosmic Background Explorer — was to be a time machine to reveal a snapshot of the early universe. The target era was just 370,000 years after the Big Bang, when the universe was still a fog of elementary particles with no discernible structure. This is called the epoch of recombination, when the hot universe cooled to a point that allowed protons to bind with electrons to form the very first atoms, mostly hydrogen with a sprinkling of helium and lithium. As the atoms formed, the fog lifted, and the universe became clear. Light broke through. That ancient light, from the Big Bang itself, is with us today as remnant microwave radiation called the cosmic microwave background.
Tall but never imposing, demanding but never mean, Mather is a study in contrasts. His childhood was spent just a mile from the Appalachian Trail in rural Sussex County, New Jersey, where his friends were consumed by earthly matters such as farm chores. Yet Mather, whose father was a specialist in animal husbandry and statistics, was more intrigued by science and math. At age six he grasped the concept of infinity when he filled up a page in his notebook with a very large number and realized he could go on forever. He loaded himself up with books from a mobile library that visited the farms every couple of weeks. His dad worked for Rutgers University Agriculture Experiment Station and had a laboratory on the farm with radioisotope equipment for studying metabolism and liquid nitrogen tanks with frozen bull semen. His dad also was one of the earliest users of computers in the area, circa 1960, maintaining milk production records of 10,000 cows on punched IBM cards. His mother, an elementary school teacher, was quite learned, as well, and fostered young John’s interest in science.
A chance for some warm, year-round weather ultimately brought Mather in 1968 to University of California, Berkeley, for graduate studies in physics. He would fall in with a crowd intrigued by the newly detected cosmic microwave background, discovered by accident in 1965 by radio astronomers Arno Penzias and Robert Wilson. His thesis advisor devised a balloon experiment to measure the spectrum, or color, of this radiation to see if it really came from the Big Bang. (It does.) The next obvious thing was to make a map of this light to see, as theory suggested, whether the temperature varied ever so slightly across the sky. And years later, that’s just what he and his COBE team found: anisotropy, an unequal distribution of energy. These micro-degree temperature fluctuations imply matter density fluctuations, sufficient to stop the expansion, at least locally. Through the influence of gravity, matter would pool into cosmic lakes to form stars and galaxies hundreds of millions of years later. In essence, Mather and his team captured a sonogram of the infant universe.
Yet the COBE mission, like Webb, was plagued with setbacks. Mather and the team proposed the mission concept (for a second time) in 1976. NASA accepted the proposal but, that year, declared that this satellite and most others from then on would be delivered to orbit by the Space Shuttle, which itself was still in development. History would reveal the foolishness of such a plan. Mather understood immediately. This wedded the design of COBE to the cargo bay of the unbuilt Shuttle. Engineers would need to meet precise mass and volume requirements of a vessel not yet flown. More troublesome, COBE required a polar orbit, difficult for the Space Shuttle to deliver. The COBE team was next saddled with budget cuts and compromises in COBE’s design as a result of cost overruns of another pioneering space science mission, the Infrared Astronomical Satellite, or IRAS. Still, the tedious work continued of designing instruments sensitive enough to detect variations of temperatures just a few degrees above absolute zero, about −270°C. From 1980 onward, Mather was consumed by the creation of COBE all day every day. The team needed to cut corners and make risky decisions to stay within budget. News came that COBE was to be launched on the Space Shuttle mission STS-82-B in 1988 from Vandenberg Air Force Base. All systems go.
Then the Space Shuttle Challenger exploded in 1986, killing all seven of its crew. NASA grounded Shuttle flights indefinitely. COBE, now locked to Shuttle specifications, couldn’t launch on just any other rocket system. COBE was too large for a Delta rocket at this point; ironically, Mather had the Delta in mind in his first sketch in 1974. The team looked to Europe for a launch vehicle, but this was hardly an option for NASA. Instead, the project managers led a redesign to shave off hundreds of pounds, to slim down to a 5,000-pound launch mass, with fuel, which would just make it within the limits of a Delta by a few pounds. Oh, and McDonnell Douglas had to build a Delta rocket from spare parts, having been forced to discontinue the series in favor of the Space Shuttle.
The team worked around the clock over the next two years. The final design challenge was ... wait for it ... a sunshield, which now needed to be folded into the rocket and spring-released once in orbit, a novel approach. COBE got the green light to launch from Vandenberg Air Force Base in California, the originally desired site, because it offered easier access to a polar orbit than a Shuttle launch from Florida. Launch was set for November 1989, and COBE was delivered to the base several months ahead of time.
Then, on October 17, the California ground shook hard. A 6.9-magnitude earthquake struck Santa Cruz County, causing widespread structural damage. Vandenberg, some 200 miles south, felt the jolt. As pure luck would have it, COBE was securely fastened only because two of the engineers minding it had bolted it down that day before going off to get married. The instrument suffered no damage, and despite more drama from high winds on launch day, it launched successfully on November 18. Myriad worries followed in the first weeks of operation: the cryostat cooled too quickly; sunlight reflecting off Antarctic ice played havoc with the power system; trapped electrons and protons in the Van Allen belts disrupted the electronics; and so on.
All the delays, all the drama, faded into distant memory for Mather as the results of the COBE experiment came in. The full data set would take four years to compile, but the results were mind-blowing. The first came just weeks after launch, when Mather showed the spectrum to the American Astronomical Society and received a standing ovation: the Big Bang was safe as a theory. Two years later, at an April 1992 meeting of the American Physical Society, the team showed their first map. Data matched theory perfectly. This was the afterglow of the Big Bang, revealing the seeds that would grow into stars and galaxies. Physicist Stephen Hawking called it “the most important discovery of the century, if not of all time.”
Mather spoke humbly of the discovery in his Nobel acceptance speech in 2006, fully crediting his remarkable team and his colleague George Smoot, who shared the prize with him that year. But he didn’t downplay the achievement. He noted that he was thrilled with the now broader “recognition that our work was as important as people in the professional astronomy world have known for so long.”
Mather maintains that realism today. While concerned about delays, threats of cancellation, cost overruns, and not-too-subtle animosity in the broader science community over the “telescope that ate astronomy,” he didn’t let this consume him or his team. “There’s no point in trying to manage other people’s feelings,” he said. “Quite a lot of the community opinion is, ‘well, if it were my nickel, I’d spend it differently.’ But it isn’t their nickel; and the reason why we have the nickel in the first place is because NASA takes on incredibly great challenges. Congress approved of us taking on great challenges. And great challenges aren’t free. My feeling is that the only reason why we have an astronomy program at NASA for anyone to enjoy — or complain about — is that we do astonishingly difficult projects. We are pushing to the edge of what is possible.”
Webb isn’t just a little better than the Hubble Space Telescope, Mather added; it’s a hundred times more powerful. Yet his biggest worry throughout mission design was not the advanced astronomy instruments but the massive sunshield, which needed to unfold. All the instruments and deployment mechanisms had redundancy engineered into them; there are two or more ways to make them work if the primary method fails. But the sunshield was different: it would either work or it wouldn’t.
Now Mather can focus completely on the science to be had. He expects surprises; he’d be surprised if there were no surprises. “Just about everything in astronomy comes as a surprise,” he said. “When you have new equipment, you will get a surprise.” His hunch is that Webb might reveal something weird about the early universe, perhaps an abundance of short-lived objects never before seen that say something about dark energy, the mysterious force that seems to be accelerating the expansion of the universe, or the equally mysterious dark matter. He also can’t wait until Webb turns its cameras to Alpha Centauri, the closest star system to Earth. What if there’s a planet there suitable for life? Webb should have the sensitivity to detect molecules in its atmosphere, if present.
“That would be cool,” Mather said. Hints of life from the closest star system? Yes, cool, indeed.
This article originally appeared on Engadget at https://ift.tt/6ewhqsm