Thursday, December 25, 2014

Robotic Arms


Double Amputee Becomes First to Control Two Robotic Arms with Only His Mind

Baugh, as Johns Hopkins’ Applied Physics Laboratory reports, is the first “bilateral shoulder-level amputee” to wear and control two modular prosthetic limbs at the same time. The technology has been in development for more than a decade. It was funded by the Defense Advanced Research Projects Agency and was tested on Baugh as part of an experimental program run at the Johns Hopkins Applied Physics Laboratory.
Though his progress is groundbreaking, accomplishing this medical first was no easy feat. Baugh first had to undergo an intense surgery — performed by the institute’s medical director, Albert Chi — that rearranged (in medical terms, “reinnervated”) the nerves in his chest.

“I remember when I first came out from under it, the pain — I don’t even remember the original being that much excruciating pain,” Baugh said in a video released by the university. 

Sony Xperia Z3 scores a record battery life

All bow to the new endurance king: Sony Xperia Z3 scores a record battery life for its category

We are not going to mince words here - Sony is the current king of battery life when it comes to brand-name smartphones. We did our grueling battery benchmark on the Xperia Z3 over the weekend, and it broke all records in its respective category, just as we suspected it would do, given Sony's consistent performance in that department this year.

The phone lasted for nearly nine and a half hours of screen-on time when we ran our tasking battery benchmark, putting it comfortably ahead of any other flagship big-screen phone or phablet we've tested at the moment. 


As mentioned, this is more than any other flagship we've measured so far - phone or phablet - and even more than the Ascend Mate7, which comes equipped with a whopping 4100 mAh juicer, while the Z3 has "only" a 3100 mAh unit in a 7.3mm-thin waterproof body. 

The Z3's battery endurance is also better than that of the Z2, which already had great battery life, so kudos to Sony here - it managed to squeeze much more life out of a slightly smaller battery than the Z2's, making for a true "two-day" battery. The charging time, on the other hand, is much longer than average, at close to four hours with the paltry stock 850 mA charger. 

If you think Sony achieved this by accident, a simple look at our top ten endurance champs will tell you otherwise. Sony phones currently occupy six of the top ten slots in our battery benchmark ranking, ranging from low-end devices like the Xperia C, through midrangers like the slim T3, all the way to the Xperia Z3 flagship. The "culprit" must be Sony's displays, which might not be the best out there when it comes to color reproduction and viewing angles, but apparently use an extremely frugal technology - and the screen is the component with the highest battery consumption in our smartphones.

Battery life

We measure battery life by running a custom web-script, designed to replicate the power consumption of typical real-life usage. All devices that go through the test have their displays set at 200-nit brightness.

Screen-on time in hours (higher is better):

Sony Xperia Z3: 9h 29 min (Excellent)
Huawei Ascend Mate7: 9h 3 min (Excellent)
Sony Xperia Z2: 8h 10 min (Excellent)
Samsung Galaxy S5: 7h 38 min (Good)
HTC One (M8): 7h 12 min (Good)
Apple iPhone 6 Plus: 6h 32 min (Average)
LG G3: 6h 14 min (Average)
Samsung Galaxy Note 3: 6h 8 min (Average)
Motorola Moto X (2014): 5h 45 min (Average)
Apple iPhone 6: 5h 22 min (Poor)
Apple iPhone 5s: 5h 2 min (Poor)
Google Nexus 5: 4h 50 min (Poor)

The Pirate Bay Back End

The Pirate Bay Back End “Hydra” to go Live on January 1, 2015

The isoHunt team launched oldpiratebay.org with the aim of providing a torrent search following the raid of The Pirate Bay on December 9th. While users were happy to see this at the time, it still didn't make up for the fact that The Pirate Bay was shut down, and there had been no word on when or if it would return.

On the site the isoHunt team states, "As you probably know the beloved Pirate Bay website is gone for now. It’ll be missed. It’ll be remembered as the pilgrim of freedom and possibilities on the web. It’s a symbol of liberty for a generation of internet users.

In its honor we are making the oldpiratebay.org search. We, the isohunt.to team, copied the database of Pirate Bay in order to save it for generations of users. Nothing will be forgotten. Keep on believing, keep on sharing."

They not only created a search, but have been encouraging anyone who wants to, to create their own copy of The Pirate Bay. Why would so many people want to do that? Because it’s all part of the plan for the return of The Pirate Bay, working in collaboration with isoHunt. Our sources tell us that "hydra" is a code name for the back end of The Pirate Bay, and that “hydra” plans to go live on January 1, 2015.

In Greek mythology, the Hydra is a serpent with many heads, hence the image that appears on the .se site right now. The Pirate Bay team's plan is to have so many fronts (fake copies) of The Pirate Bay that it would be nearly impossible for the authorities to get to the real Pirate Bay, which will be hidden behind the servers and will be where the magnets, databases and users actually are. User access will be through the copies of TPB to get to the real Pirate Bay.
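The architecture described amounts to a set of throwaway reverse-proxy fronts for one hidden core. A toy sketch of the relaying step a front might perform (the back-end address is a made-up placeholder, not a real site):

```python
# Toy illustration of the "hydra" idea: many disposable front-end mirrors
# that hold no data themselves and simply relay search queries to one
# hidden back end. HIDDEN_BACKEND is a hypothetical placeholder address.
from urllib.parse import urlencode

HIDDEN_BACKEND = "http://backend.example.onion"  # hypothetical

def front_url(query):
    """Build the back-end request a front mirror would forward."""
    return f"{HIDDEN_BACKEND}/search?{urlencode({'q': query})}"

print(front_url("ubuntu iso"))
# http://backend.example.onion/search?q=ubuntu+iso
```

If one front is seized, nothing of value is lost: the magnets and user database live only behind the hidden core.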

Our sources provided us with a copy of the structured layout for the return of TPB, and also tell us that The Pirate Bay will not be returning on the “thepiratebay.se” domain, but plans to return on a new domain, possibly using .xyz. Tiamo, who owns thepiratebay.se domain, is currently in jail.

This is good news for the thousands of Pirate Bay users who have been wondering when their favorite torrent site would be back.

Stay tuned for more news as it becomes available.

Sony Hack

The Case for N. Korea’s Role in Sony Hack


There are still many unanswered questions about the recent attack on Sony Pictures Entertainment, such as how the attackers broke in, how long they were inside Sony’s network, whether they had inside help, and how the attackers managed to steal terabytes of data without notice. To date, a sizable number of readers remain unconvinced about the one conclusion that many security experts and the U.S. government now agree upon: That North Korea was to blame. This post examines some compelling evidence from past such attacks that has helped inform that conclusion.


An image from HP, captioned "North Korean students training for cyberwar."

The last time the world saw an attack like the one that slammed SPE was on March 20, 2013, when computer networks running three major South Korean banks and two of the country’s largest television broadcasters were hit with crippling attacks that knocked them offline and left many South Koreans unable to withdraw money from ATMs. The attacks came as American and South Korean military forces were conducting joint exercises in the Korean Peninsula.

That attack relied in part on malware dubbed “Dark Seoul,” which was designed to overwrite the initial sections of an infected computer’s hard drive. The data wiping component used in the attack overwrote information on infected hard drives by repeating the words “hastati” or “principes,” depending on which version of the wiper malware was uploaded to the compromised host.
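Those overwrite strings are exactly the kind of marker an incident responder looks for when triaging a wiped drive. A minimal forensic detection sketch (the marker strings come from the Dark Seoul reporting above; everything else is illustrative):

```python
# Forensic sketch: check the first region of a disk image for the
# "hastati" / "principes" strings the Dark Seoul wiper wrote over
# infected drives. A detection aid only, not the wiper itself.
MARKERS = (b"hastati", b"principes")

def scan_boot_region(image_bytes, region=512 * 1024):
    """Return the wiper markers found in the first `region` bytes."""
    head = image_bytes[:region]
    return [m.decode() for m in MARKERS if m in head]

# Example on a synthetic "wiped" image:
fake_image = b"hastati" * 100 + b"\x00" * 4096
print(scan_boot_region(fake_image))  # ['hastati']
```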


Both of those terms reference the military classes of ancient Rome: “hastati” were the younger, poorer soldiers typically on the front lines; the “principes” referred to more hardened, seasoned soldiers. According to a detailed white paper from McAfee, the attackers left a calling card a day after the attacks in the form of a web pop-up message claiming that the NewRomanic Cyber Army Team was responsible and had leaked private information from several banks and media companies and destroyed data on a large number of machines.

Thursday, December 11, 2014

Nexus X to boost Google stock


All eyes have been on Apple recently as they finally unveiled their long awaited iPhone 6 and Apple Watch. But now that the Apple frenzy is winding down, it’s time to talk about the next big event in mobile tech – the upcoming release of a new Google Nexus phone.

Based on the latest (leaked) photos of the new phone (which will probably be called the Nexus X), it seems that the next generation of Google phones might be moving further into the spotlight and beginning to generate its own buzz. According to the latest rumors, the upcoming phone, which will apparently be manufactured by Motorola, will have a 5.2-inch display, a 2.0 GHz octa-core processor from MediaTek and a 13-megapixel camera.
The $499.99 price tag clearly makes this one of Google’s more expensive phones and puts it that much closer, price-wise, to the coveted iPhone.
This impressive phone is sure to excite the tech-savvy and gadget "geeks", but why should investors care about a Google phone, albeit a relatively impressive one? After all, sales of the device itself are expected to make only a marginal difference in Google's total revenue picture, so why are investors excited?
Google wants Apple’s users
Although Google’s newest phone isn’t going to be released until sometime around Halloween, with specs that could possibly overshadow those of the new iPhone 6, the Nexus X will certainly be no ghost. But what investors are coming to realize is that the Nexus X is a tool that will allow Google to tap into the lucrative market of 800 million iTunes users, the majority of whom use their iPhone or other Apple products exclusively.
As seen below, Apple users statistically have more available income at their disposal and spend more money on apps, music, games or whatever else is available on their mobile devices. And this is what Google is really after; Google wants to get a piece of the crème de la crème pie, by enticing the big spenders to move into their Android ecosystem and spend dollars at Google Play rather than iTunes. This is where the big bucks come in and that is what is getting investors excited.
[Chart: spending by iPhone vs. Android users]
Charting the Targets
As illustrated above, what Google investors really care about is how this new Nexus X will rival iPhone to lure more Apple iTunes users to Google Play.  At the moment,  it seems that investors are indeed optimistic. The targets range from the strongly bullish, who target $750 a share, a 27% upside, to the more conservative bulls who eye $672, a 14% upside. But what is more interesting is that the bears foresee a downside risk of 3% which is, all in all, a rather moderate prediction and a signal that investors expect little downside risk.
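As a quick sanity check, each target and its stated upside imply the share price at the time of writing, and the two bullish estimates should agree. A sketch:

```python
# Back-of-envelope check of the analyst targets quoted above:
# a target and its stated upside imply the share price at writing.
def implied_current(target, upside_pct):
    return target / (1 + upside_pct / 100)

print(round(implied_current(750, 27), 2))  # 590.55
print(round(implied_current(672, 14), 2))  # 589.47
```

Both targets imply a price of roughly $590 a share, so the quoted upsides are internally consistent.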
[Chart: GOOG analyst price targets, September]
Will the Nexus X deliver? Will Google get its hands on those lucrative iTunes users? Only time will tell, but when it does, analysts, it seems, may have already carved out the upside targets.

Alipay adds Touch ID

Chinese mobile payments platform 'Alipay' adds Touch ID support


While iPhone owners in China may not yet have access to Apple Pay, it looks like they can still look forward to using Touch ID with their mobile payments. Alipay, a mobile payments platform run by Alibaba in China, was updated Tuesday with support for Touch ID verification.
The addition of Touch ID verification in Alipay Wallet replaces the app's previous password verification process, and should be available immediately to users.
In addition, a report by China Daily says that Alipay is working on bringing other biometric verification processes to the platform:
Alipay, which has more than 300 million users in China, said it is working on other biometric technologies, which can make it possible for people to confirm payments for a wide variety of goods and services by winking or simply by showing their faces in the near future.
The integration of Apple's fingerprint sensor in Alipay could be the first step towards the two companies working together. Just last month, Alibaba's Executive Vice Chairman Joseph Tsai hinted that Alipay could provide the back-end services for Apple Pay in China.

Monday, December 1, 2014

terrifying reliance on GPS

Our terrifying reliance on GPS, and the need to develop a ground-based alternative


[Image: GPS satellite, artist's render]



The Global Positioning System, or GPS, has — somewhat surprisingly — found itself at the heart of modern civilization. I don’t think anyone predicted how significant GPS would be today, just 14 years after it was made freely and globally available to civilians and commercial operations in 2000 — but hey, it happened, and there’s no going back. 

       There is no doubt that the ubiquity of GPS across all areas of civilian, commercial, and scientific endeavor has improved the quality of life for billions of people. From self-driving cars to clock synchronization, from geofencing to earthquake prediction to finding a safe walking or cycle route home, GPS really is one of the most vital services. It is a little bit scary, then, that GPS can very easily be jammed by terrorists or other nefarious actors.

      GPS jammers

      Because GPS ultimately relies on very weak radio signals being beamed to you from about 12,600 miles (20,200 km) above Earth, it’s very easy for GPS to fail or be otherwise disrupted. Being underground is one obvious example, but just walking around the streets of a high-rise city can be pretty tough for GPS.

You can buy a battery-powered GPS jammer online for less than $100.

It’s also very easy to proactively disrupt GPS with a jamming device. Because the signal is so weak, and because the frequency band used by GPS is very well known (1559 to 1610 MHz), it’s very easy to build a device that blankets an area in RF noise, smothering the GPS signal. (In case you were wondering, picking up a GPS signal is like trying to spot a 25-watt light bulb from around 10,000 miles away.)

      There are cheap, pocket-sized GPS jammers that you can buy online that provide a jamming radius of a few meters — but of course, with a little technical knowhow, it would be fairly easy to build a larger device that blocks an entire street or city from using GPS. (Those pocket-sized jammers are regularly used by truck drivers and couriers, incidentally, so that they can evade the ever-watchful gaze of HQ.)

      When GPS fails

      Because so many different technologies and endeavors are backed by GPS, the consequences of GPS failing or being jammed are wildly varied. For someone on foot, a GPS outage might simply be an inconvenience that forces you to stumble around for a little longer in your search for the highest-rated indie coffee shop on TripAdvisor.

For a self-driving car, or perhaps an ambulance driver trying to find someone’s house, the repercussions of GPS failure are a little more significant. For seafaring vessels, especially larger cargo ships, a loss of GPS can mean a complete loss of control — which is a problem if you’re approaching the dock at high speed, or if you end up stranded in the middle of the ocean.
      And, of course, you really don’t want to lose GPS if you happen to be barreling through the sky at hundreds of miles an hour — like if you’re an airplane, or a cruise missile perhaps.

      So, of course, you need a GPS backup

[Image: The orbiting constellation of GPS satellites]
      The driving force behind the creation of the GPS — and the more recent Russian GLONASS, Europe’s Galileo, and China’s COMPASS — is that you can provide global coverage with a constellation of just two dozen satellites.

GPS consists of 32 orbiting satellites, ensuring that (generally) you can see nine satellites at any given time — more than enough to get an accurate location fix (the minimum is four). Because of the distance between us and the satellites, though, the signals are very easy to jam (and, as an aside, they’re not very good for calculating altitude, either).
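The "minimum of four" figure comes from the math of the fix itself: the receiver solves for three position coordinates plus its own clock bias. A rough sketch of that least-squares solve, with synthetic satellite positions in arbitrary units (real receivers work in Earth-centered coordinates with many additional corrections):

```python
import numpy as np

# Sketch of turning satellite pseudoranges into a position fix:
# solve for (x, y, z, clock bias) by iterative least squares.
def fix_position(sats, pseudoranges, iters=10):
    """sats: (N, 3) satellite positions; pseudoranges: (N,) measured
    ranges (true range + receiver clock bias). Returns (x, y, z, bias)."""
    est = np.zeros(4)  # initial guess: origin, zero clock bias
    for _ in range(iters):
        diffs = sats - est[:3]
        ranges = np.linalg.norm(diffs, axis=1)
        residuals = pseudoranges - (ranges + est[3])
        # Jacobian: unit vectors toward the estimate, plus a bias column
        J = np.hstack([-diffs / ranges[:, None], np.ones((len(sats), 1))])
        est += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return est
```

With four satellites the system is exactly determined; with nine visible, the extra measurements average down noise, which is why more visible satellites means a better fix.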

      A ground-based system would be much more flexible and accurate than GPS — but the trade-off is that you need to blanket the Earth in hundreds or thousands of transmitters, which are expensive to build, hard to maintain, and in some cases just plain inconvenient (how do you monitor the southern Indian or Pacific oceans, for example?)

      Still, the difficulty of building a ground-based global positioning system is far outweighed by the catastrophe that would result from an unexpected attack or prolonged outage of GPS — which is why some nations are now taking a serious look at ground-based alternatives to GPS, just in case a worst-case scenario does actually occur.

      Malware Like Stuxnet and Flame

      This Artist’s Images Integrate Code From Malware Like Stuxnet and Flame



      For years, sophisticated state-created malware like Stuxnet and Regin has fascinated and vexed the security research community and launched a new foreign policy debate. Now it’s infecting the art world, too.
In an exhibit at Manhattan’s Callicoon Fine Arts gallery running through the next month, artist James Hoff is showing a new series of images that visually integrate code from government-written malware samples like Stuxnet and Flame. As Hoff describes it, he’s used those spying and cyberwar tools to “glitch” the digital images, allowing the malware to add a certain uncontrollable static to his otherwise carefully crafted works of abstract color.
      “It’s about letting the virus be the generative aspect of the process in the studio,” he says. “That variability is very interesting to me. It allows you to get out of your own way of making art and bring randomness into the mix.”
      A pair of cufflinks Hoff created that contain a USB stick that stores a piece of music whose notes are based in part on Stuxnet’s code. 
      Hoff creates his malware-glitched works, which have all already been sold, by dropping digital paintings into a hex editor that converts it to text. Then he intersperses randomly chosen chunks of code from malware files, and reconstitutes the data as an image file. The code corrupts the image in unexpected ways, adding chromatic streaks, blotches, and static. In two of the images, Hoff used code from the NSA-created software Stuxnet, built to destroy centrifuges at Iranian nuclear facilities. The other 14 images use code from Flame, which Hoff calls by its alternate name Skywiper, an older NSA-created spyware program.
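Hoff's hex-editor process is a classic "databending" technique. A rough sketch of the general idea, not his exact workflow (the header size and chunk parameters are assumptions; real image formats tolerate this to varying degrees, and the header usually must stay intact or the file won't open at all):

```python
import random

# Databending sketch: splice chunks of a foreign file (e.g. malware
# code) into the body of an image's raw bytes, leaving the header alone.
HEADER_SIZE = 512  # bytes to leave untouched (assumption)

def glitch(image_bytes, payload, chunks=5, chunk_len=64, seed=None):
    """Overwrite `chunks` random spans of the image with payload bytes."""
    rng = random.Random(seed)
    data = bytearray(image_bytes)
    for _ in range(chunks):
        if len(data) <= HEADER_SIZE + chunk_len:
            break
        dst = rng.randrange(HEADER_SIZE, len(data) - chunk_len)
        src = rng.randrange(0, max(1, len(payload) - chunk_len))
        data[dst:dst + chunk_len] = payload[src:src + chunk_len]
    return bytes(data)
```

Because the spliced bytes land in compressed or encoded image data, the corruption shows up as the unpredictable streaks and static Hoff describes.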
The images, which Hoff calls his Skywiper series, are only his latest malware-inspired works. Last year he created a pair of cufflinks that hid USB memory sticks storing a piece of music based in part on Stuxnet’s code. He also released a series of iPhone ringtones glitched with code from the year-2000 ILOVEYOU virus. 
      Despite his new focus on state-crafted malware, Hoff insists his work isn’t political. But he does intend the use of state cyberwarfare tools to connect the art to the world at large. “I don’t think of viruses as good or bad. To me, they’re just agents,” he says. “I just want to pull that element into the work. It allows for that kind of reflection, both on a conceptual level and an aesthetics level. The actual code is embedded in the image you see.”

      the visually impaired

      New hopes for the visually impaired

      This artificial vision system helps visually-impaired people recognize characters. A man uses the system to recognize the letter N. (Courtesy of Nidek)


TOKYO -- Today's digital world has raised the risk of eye disease caused by the heavy strain of constant PC and smartphone use.

           Currently, about 300,000 people in Japan are estimated to have lost their eyesight. To help restore their vision, even partially, university researchers and businesses are using the newest technology available to treat vision-impaired patients, something once considered impossible.

           In January this year, a woman in her 60s living in Osaka Prefecture underwent eye surgery at the Osaka University Hospital. She had an artificial retina made from electrodes implanted to restore her sight. This procedure is the latest clinical research. The woman lost most of her vision to a rare disease called retinal pigmentary degeneration eight years ago.

       "I can now see white flowers in the garden and see a rough figure of my husband," she said. Advances in medical technology have helped her see again, even if only to a small degree.

      Reactivating cells

Osaka University has been working on ways to tackle this disease. Under its method, a blind person wears a pair of glasses equipped with small charge-coupled devices (CCDs), which send the captured image data to an image-processing device worn around the neck. The image data is then converted, and signals are transmitted to the intact retina cells via a 6-millimeter-square electrode chip implanted in the sclera, the outer part of the eye commonly known as the white of the eye.

Normally, image data captured by the crystalline lenses in our eyes is first sent to the retina via nerves before being passed on to the brain. However, decoding of image data fails when many retina cells are lost. To solve this problem, Osaka University researchers have come up with ways to generate different image data signals and transmit them to the remaining retina cells. As a result, these cells are reactivated and send image data to the brain.
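One part of that conversion is unavoidable: a CCD frame has millions of pixels, but the chip has only a handful of electrodes, so each region of the image must collapse to a single stimulation level. A crude block-averaging sketch (the grid size is an assumption for illustration; the actual Osaka signal conversion is certainly more sophisticated):

```python
import numpy as np

# Reduce a camera frame to one stimulation level per electrode by
# averaging the brightness of the image block each electrode covers.
def to_electrode_levels(image, grid=(7, 7)):
    """image: 2-D brightness array in [0, 1]; returns a grid of levels."""
    h, w = image.shape
    gh, gw = grid
    ch, cw = h - h % gh, w - w % gw        # crop so blocks divide evenly
    blocks = image[:ch, :cw].reshape(gh, ch // gh, gw, cw // gw)
    return blocks.mean(axis=(1, 3))        # mean brightness per electrode
```

This also makes the trade-off in the next section concrete: the more electrodes fit on the chip, the finer the grid, and the more of the original image survives.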

           The university hospital has already performed three such operations and plans to obtain official approval as a medical device in a few years. "This method allows patients to see figures and movements only vaguely at the moment. We hope to create artificial retina technology that can help them live independently in the future," said Takashi Fujikado, an ophthalmology professor of Osaka University who is in charge of the research project.

           To achieve greater clarity of vision, a research team led by Jun Ota, professor of photonic device science at the Nara Institute of Science and Technology, has been trying to increase the accuracy of electrode chips. Simply increasing the number of electrodes makes chips bigger. This would require more wires to link chips and complicate implant surgery.

      Working together
      Thus, professor Ota and his fellow scientists are working to develop new, micro-size electrodes. Working with Nidek, an Aichi Prefecture-based developer and manufacturer of ophthalmic devices, they have created an ultrafine processing technology for making electrodes small enough to fit a greater number onto a single chip.

This makes it possible to embed minuscule electrodes in a single electrode array. Currently, more than 1,000 electrodes can be embedded in a chip of several square millimeters, 20 times more than Osaka University's method, according to the research team. The use of this new technology and the artificial retina being developed by Osaka University will make it possible to implant these tiny chips to electrically stimulate the retina.

           Ota and his research team will study whether these chips will corrode or deteriorate inside the body, aiming to put the technology to use as soon as possible. "Visual cells process more than 100 million pixels. We are not sure if we can replicate the colors [seen by impaired eyes] fully, but this may greatly improve the quality of life for visually impaired people so that they could read books or go out," said Ota. (Nikkei)

      Intel announces 32-layer 3D NAND

      Intel announces 32-layer 3D NAND chips, plans for larger-than-10TB SSDs

      NAND flash silicon die


It’s been clear for several years that three-dimensional NAND die stacking, in which chip layers are oriented vertically as opposed to horizontal planar structures, is the way forward for next-generation chip designs. Until now, Samsung has been the only company to take that plunge, but that’s going to change with the launch of Intel’s own solution in 2015.

According to Intel, its 256-gigabit MLC NAND chips will consist of 32 layers, and will also be available in a 384-gigabit TLC configuration. Intel claims that its 256Gb die sets efficiency records, but as AnandTech reports, this depends on how you count — Samsung has consciously chosen a 32-layer 86Gbit die to minimize its die footprint rather than to maximize capacity. This gives Samsung’s V-NAND the smallest die size of any product currently on the market, and size is a very important factor in many markets.

      Moving back up the nanometer ladder

Intel, like Samsung, is expected to use a much larger process node for its 3D NAND. In Samsung’s case, that’s a 40nm process, despite the fact that the company is working on 14nm planar technology for both logic and DRAM devices. Intel and Micron have already launched 16nm 2D NAND, but the fundamental characteristics of flash mean that device reliability decreases as process nodes shrink.

Read: SanDisk’s colossal 4TB SSD: Does this mean SSDs will soon provide more storage than hard drives?



      Moving back up to 40nm NAND gave Samsung enough headroom to launch the fastest, most reliable SSD on the market today — the 850 Pro — and it’s expected to give Intel a similar kick. Intel isn’t willing to put a strict timetable on its plans for a 10TB SSD (and you can expect any such device to debut with an enterprise-class price tag), but Samsung has talked about stacking over a hundred layers of NAND per die — and if Intel hits equivalent densities, a 10TB SSD should be possible within five years.

[Chart: NAND endurance vs. longevity by process node]

      As the chart shows, moving back up to the 32nm node would more than double reliability as compared to 16nm levels. It also allows for less error correction, at least in theory, and might even enable increased drive densities. TLC drives have struggled to improve their Program/Erase (P/E) cycles, but a 40nm TLC drive might be able to offer the same reliability as an MLC drive at 16nm. The bottom line is, moving back to old nodes offers better options if density can still be scaled.
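To make the P/E figures concrete: a drive's total write endurance is roughly its capacity times its P/E cycle rating, divided by write amplification. A sketch with hypothetical numbers (the 1.5 write-amplification factor and the cycle counts are illustrative assumptions, not vendor specs):

```python
# Terabytes written (TBW) a drive can absorb before wearing out:
# TBW = capacity * P/E cycles / write amplification factor (WAF).
def tbw(capacity_gb, pe_cycles, waf=1.5):
    return capacity_gb * pe_cycles / waf / 1000  # terabytes written

# Doubling P/E cycles at an older node doubles endurance
# for the same capacity (all numbers hypothetical):
print(tbw(256, 1000))  # ~170.7 TB at 1,000 cycles
print(tbw(256, 2000))  # ~341.3 TB at 2,000 cycles
```

This is why "moving back up" the node ladder pays off: the doubled cycle rating flows straight through to drive lifetime.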

      Ideally, researchers would find ways to adopt older process node technology for more than just NAND flash. The difficulty of building ever-smaller structures has gotten to the point that we’re bumping up against the physical limits of reality — next-generation EUV lithography has bogged down in no small part because the degree of perfection required at every stage of the manufacturing process is orders of magnitude above what the chip industry was previously required to achieve.

      Intel is likely to roll out 3D NAND for enterprise SSDs first, but we should see consumer drives launching thereafter. With both Samsung and Intel on-board with the new technology, manufacturers like Toshiba are likely to follow in fairly short order.

      Global access to the web’s best content

      Global access to the web’s best content on your mobile device. Anonymous. Uncensored. Free. Outernet




No one can choose what they look like, where they’re born, or where they’re raised. All you can do is educate yourself as best you can and try to improve yourself on a daily basis. However, not everyone is given the same privilege in all parts of the world. The Internet has given us the ability to learn a myriad of new things every day, but not everyone has access to it.

      If you wish you could help the world become more connected, then why not take an active role in helping that happen? The Lantern receives radio waves broadcast by Outernet from space. This device turns these signals into files such as videos, pictures, news articles, and much more. It’s like having a library in your pocket. Whenever you want to use it, activate the Wi-Fi hotspot, and connect with any Wi-Fi enabled device. It’s free to use, and not only has a ton of information, but is updated constantly.
      This isn’t something you’ll need to worry about plugging in, as it has solar panels to help it stay juiced up. Seeing that this essentially gives you access to the internet offline, this could mean great things to all the people in the world who are willing to learn but don’t have the means. You’ll be able to download information for free without censorship (aside from parental controls for your kids), which is kind of a big deal. Getting the Lantern will cost you $99, but the company that came up with this idea has their eyes set on a much larger goal.

      4.3 billion people on Earth - the combined populations of Europe and the United States TIMES FOUR - do not have access to the Internet. The majority of humanity does not have access to the enormous library of useful information that we take for granted. Books, courseware, weather information, disaster updates, uncensored news, entertainment, language learning software. What if there was another way to give that to everyone on Earth for free?
      Enter Outernet.


      Current System: "A Library In Every Village"


      Outernet's current signal requires a dish and enables higher download rates for mass consumption on a school or village level. This campaign is to turn on a new signal that will blanket the entire Earth with free data and can be received on a device that fits in your hand.

      Proposed System: "A Library In Every Pocket"


Information should be a public good available to everyone. This new broadcast frequency will enable global coverage to a pocket-sized device. That device is called "Lantern."


      What Could Our Species Accomplish?


      Imagine if every person could learn for free. 
      Imagine if there were no more dark patches of people without access to information.
      Imagine if the lights came on.
      Never before in human history has there been an opportunity to raise the bar for everyone at once so dramatically. Outernet is such an inflection point. 
      Currently, Outernet is broadcasting from geostationary satellites thousands of miles above the Earth. This is a Ku-band signal with a footprint which reaches all of North America, Europe, the Middle East, and North Africa. We launched this first phase of Outernet on August 11, 2014 to test various elements of the technology, allow users to build receivers and validate our work while providing us with useful feedback. 
      We are ready for our next step.

      How Does Outernet Work?


      Below is a nifty infographic that the L.A. Times put together about Outernet. You can see the original with its animation here


      How We Decide What Gets Broadcast



      The Outernet broadcast is made up of three components:
      The Core Archive is a collection of content selected by Outernet because of its importance to humanity. This will include news, educational content, and disaster updates, when applicable. It is publicly viewable, dynamically edited, and subject to continuous discussion and review.
      We are currently compiling a first draft of the Core Archive, which we will publish in November. Here is a sample of what it will include:
Wikipedia; Linux distributions such as Ubuntu, Fedora, and Arch; educational courseware from EdX and Khan Academy; Project Gutenberg books; Open Source Ecology plans; audio and video literacy lessons; regular digests of the Bitcoin blockchain; compiled news bulletins; disaster updates; OpenStreetMap; commodities and weather information; and much more. The entire Core Archive will be less than 1 TB. Pared-down versions will be considerably smaller.
      The Queue is content that is requested and voted on by global citizens. Anyone can request content and we are constantly endeavoring to create more channels for users to submit requests. Content request channels include SMS, Facebook Zero, Twitter, and the Outernet website. A combination of popularity and origin of the request (higher priority being given to areas where there are greater barriers to making a request) determine broadcast eligibility and frequency. 
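The "popularity plus origin" weighting described above could be modeled as a simple score. A sketch with invented channel weights (Outernet has not published an actual formula; the weights and request names here are purely illustrative):

```python
# Hypothetical Queue ranking: votes weighted by request channel, with
# channels from harder-to-reach regions (e.g. SMS) boosted.
ORIGIN_WEIGHT = {"sms": 3.0, "facebook_zero": 2.0, "twitter": 1.0, "web": 1.0}

def queue_score(votes, origin):
    """Higher scores broadcast sooner and more often."""
    return votes * ORIGIN_WEIGHT.get(origin, 1.0)

requests = [("khan-academy-math", 120, "web"), ("malaria-guide", 50, "sms")]
ranked = sorted(requests, key=lambda r: queue_score(r[1], r[2]), reverse=True)
print([name for name, _, _ in ranked])
# ['malaria-guide', 'khan-academy-math']
```

Under a weighting like this, a modestly popular SMS request from an underserved region can outrank a more popular web request, matching the stated priority.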
      Sponsored content can be submitted by anyone. Think of it like an ad in a newspaper; you pay to have your content distributed alongside organically selected material. Sponsored content is flagged as such and its sponsor is identified. To add your content, select the $25 reward above. After IndieGoGo, this feature will remain available on a per MB basis.
      For a more in-depth discussion on this, see this piece in Quartz by Outernet's Archive Editor, Thane Richard.

      Outernet's Funding



Outernet is supported by the Media Development Investment Fund, an impact investment fund which provides financing to ventures that deliver the news, information and debate that people need to build free, thriving societies. Since 1996, MDIF has invested $130 million in 105 different companies in more than 36 countries. Outernet is thrilled to be supported by such a knowledgeable and impactful organization.



Published by Derlich-Herman, Systems Engineer @ DCS.