NASA Sending Microsoft’s HoloLens to Space

On Sunday, June 28th, 2015, two Microsoft HoloLens augmented reality headsets were sent to the International Space Station (ISS) aboard a SpaceX Dragon resupply mission, which also carried various supplies and other much-needed cargo to the final frontier.

The HoloLens headsets are part of a new NASA project called ‘Sidekick’, and they are meant to facilitate communication between the astronauts and their technicians back here on Earth, reduce training time and help crews perform their tasks more efficiently. The need for verbal instructions is expected to be replaced, in part, by specially designed holographic animations.

The Sidekick headsets will eventually operate in two basic modes. The first, ‘Remote Expert Mode’, is purely communicational: it connects the astronaut to a technician on Earth via Skype. Crucially, the technician can jot down notes or draw sketches that the astronaut can see, which is very helpful, since many technical tasks require more than verbal instructions to be completed successfully. However, the headsets sent up on this mission are not meant to work in ‘Remote Expert Mode’; that mode will be tested at a later date, with two new headsets being sent into space by NASA.

This time around, the headsets will be used to test the second mode, ‘Procedure Mode’, which relies on the HoloLens’ ability to project 3D holographic models and animations over the astronauts’ real-time view of their surroundings. This technology is both innovative and practical, and it is bound to be very helpful when astronauts are required to perform certain tasks: completing a job is far easier when you can watch a holographic representation of how it is supposed to be done, in real time!

Regarding the project, NASA’s Sam Scimemi said: ‘HoloLens and other virtual and mixed reality devices are cutting edge technologies that could help drive future exploration and provide new capabilities to the men and women conducting critical science on the International Space Station. This new technology could also empower future explorers requiring greater autonomy on the journey to Mars.’

HoloLens creator Alex Kipman said, in a statement: ‘Sidekick is a prime example of an application for which we envisioned HoloLens being used – unlocking new potential for astronauts and giving us all a new perspective on what is possible with holographic computing.’

NASA and Microsoft’s next project, called ‘OnSight’, will enable scientists to use advanced holographic computing to work on various projects in real time by controlling rovers on Mars, as if they were driving them in a virtual reality environment!

Holographic technology has advanced by leaps and bounds during the past few years, and we are glad to see that it is being put to good use, for the benefit of all mankind. As long as such important technological advancements are utilized in a meaningful way, driven by our species’ sense of wanderlust and our need to explore the unknown, there can actually be hope for the future!

Amazon’s Prime Air Drone

Back in 2013, Amazon CEO Jeff Bezos appeared on ‘60 Minutes’ to present to the world the company’s innovative idea of using drones to deliver customers’ products right to their doorsteps! This exciting, hi-tech delivery system was named ‘Amazon Prime Air’, and it seemed to have come straight out of a sci-fi film!

Amazon’s ‘Prime Air’ project seemed very ambitious, since it promised swift, safe delivery of parcels under 5 pounds (small enough to fit inside the small unmanned aerial vehicle’s cargo hold) in only 30 minutes! The service would be available to all customers living within a 10-mile radius of an Amazon fulfillment center or affiliated store.

There was only one problem: commercial unmanned aerial vehicles were not yet legal in the US. Even though the technology existed and was ready to be tested, the legal system was lagging behind, and it didn’t pick up the pace until March 2015, when the US Federal Aviation Administration (FAA) finally gave Amazon permission to begin testing with a specific drone model. That model, however, had already become obsolete by then, as the FAA had taken more than six months to approve the testing! “We don’t test it anymore. We’ve moved on to more advanced designs that we already are testing abroad,” said Amazon’s vice president, and even after the FAA gave the company permission to fly a newer model within the continental US a month later, he stated that the rule set applied by the agency was “more restrictive than the rules and approvals by which [Amazon] conducted outdoor testing in the UK and elsewhere”.

The FAA, realising that it needed to pick up the pace on drone permits, created a blanket flying permission under which all unmanned vehicles weighing 55 pounds or less are eligible for a permit, provided they fly below 200 ft at all times, operate only during daylight hours and remain well away from airports.

Recently, Amazon demanded that the US Federal Government start working on a legal framework that would lay down the law on commercial drone flight regulations, so the company could avoid having to deal with each and every state’s take on the matter. In response, Michael Whitaker, the FAA Deputy Administrator, announced that regulations on the operation of unmanned aerial vehicles will “be in place within a year”!

Amazon’s stance seems rather logical, especially if we take into account the company’s past inability to comply with various state tax laws, which were apparently too ‘confusing’ for it; an offense for which the company offered various (unsuccessful) explanations!

It would certainly be nice for companies looking to operate commercial drones to have a single, clear-cut, stable set of laws to work under, since that would save them lots of time – and money! However, it isn’t really impossible to conform to specific state regulations; sure, it would take a bit more legwork, but technically it could be done.

Besides, state regulations seem more appropriate when it comes to drone delivery systems. Delivery drones are meant to operate within a 10-mile radius of their base, which makes them more of a ‘local’ issue. The rules that work for one place might not work for another, since cities vary greatly in size and layout, not to mention that some people are more receptive to high-tech gadgets, while others are more wary of them. It’s great to have a nice, hot pizza delivered promptly to your doorstep, but it’s an entirely different matter to have a camera-equipped drone hovering outside your children’s window!

FTC Cracks Down on Kickstarter Fraud

The FTC went after the maker of a board game who used crowdfunding money for personal expenses.

Over the past few years, crowdfunding has really taken off, thanks to the internet’s power to bring people together. Two of the most popular and well-known funding platforms are Kickstarter and Indiegogo; they give people the ability to back creative projects of their choosing and follow their development.

With over $1.77 billion spent by users like you and me to fund their favorite projects, and almost 87,000 successfully completed projects on Kickstarter alone, crowdfunding seems like an exciting way to make fans feel like part of the creative team, all while helping unlikely projects and products reach their funding goals – not to mention the cool goodies often reserved for backers and people who choose to pre-order!

However, with a success rate of about 37%, it’s only natural to wonder what happens to the money donated to projects that reached their funding goal but never delivered in the end. Where does all that money go? Does it get refunded? Do fraudulent creators have anything to fear, apart from a bad reputation?

Well, it’s important to note that most crowdfunding platforms, Kickstarter included, do not offer refunds. All transactions take place between the backers and the project’s creator, which means backers should look into each and every project very carefully, perhaps even reach out to its creators, before making an educated guess about which ones seem legitimate. Otherwise, they risk losing their money.

Since there is almost no protection against money-grabbing, fraudulent projects, crowdfunding would appear to be the perfect place for scam artists and con men, right? Well, not anymore! In an unprecedented move, the Federal Trade Commission has taken legal action against a Kickstarter project creator who failed to honor his promises!

The Lovecraftian horror board game that never came to life: ‘The Doom That Came to Atlantic City’


Artist Lee Moyer and game designer Keith Baker had been working on a new board game for more than ten years! In ‘The Doom That Came to Atlantic City’, players pick one of their favorite horrors from the Lovecraftian universe and compete against the other players to take over the world! In May 2012 the pair turned to Erik Chevalier’s company, ‘The Forking Path’, to bring the game to life.

The game looked awesome, and Lovecraft horror fans all over the world were psyched! Who wouldn’t want to wreak havoc playing the Cthulhu piece? On top of that, the miniatures would be sculpted by artist Paul Komoda, a fact that sweetened an already great deal even further! About 1,200 fans collectively backed the project with more than $120,000, completely overshooting the $35,000 funding goal that Erik Chevalier’s board game company had initially set!

Erik posted frequent updates on the project’s status and, for the next 14 months or so, all seemed well. Then, suddenly, and without informing the game’s designers, he published an apologetic post announcing that the project had been scrapped and that his inexperience was to blame. He seemed willing to take the fall, yet he never once explained exactly what had happened; only long-winded explanations of the vaguest variety were offered:

  • ‘Since then rifts have formed and every error compounded the growing frustration, causing only more issues’
  • ‘Unfortunately that wasn’t in the cards for a variety of reasons’
  • ‘I’ve spent a large amount of time pitching investors, begging banks for loans and seeking other sources of funding to fix this. Sadly I found no takers’

However, Erik never explained what those issues and reasons actually were. At this point, you can see how his backers might feel a little duped, right?

The Federal Trade Commission’s take on crowdfunding

In comes the FTC, here to save the day! The Commission investigated the matter and discovered that, even though Erik claimed the money had run out ‘after paying to form the company, for the miniature statues, moving back to Portland, getting software licenses and hiring artists to do things like rule book design and art conforming’, and that he would refund the project’s backers in time, none of the above was actually true.

According to the FTC, he spent most of the money on personal expenses, such as rent, moving to another city, licenses and equipment he needed for another of his projects! Quite a bold move, that one.

So, on June 11th, the FTC announced a settlement under which Erik Chevalier may never again misrepresent the purpose of a crowdfunding campaign, its rewards, its progress or the qualifications of the people associated with it, and must honour any stated refund policy. He was also ordered to pay roughly $111,000, a judgment that was suspended because he is unable to pay.

So what was the point of FTC’s involvement in all this?

What was the point of the FTC’s involvement, though, since, in the end, Erik Chevalier can still launch crowdfunding projects and is not obligated to refund the people he scammed with ‘The Doom That Came to Atlantic City’?

Wouldn’t it have been more reasonable to ban him from participating in crowdfunding projects entirely, or even force him to return the money he took? The truth is that the agency probably couldn’t do much more, legally speaking; it would be treading in very murky waters, since there are currently no rules or laws regulating crowdfunding.

Still, even though there will always be unsuccessful projects, con artists and scammers, and even though consumer protection is important, we should keep in mind that crowdfunding is essentially built on good will between creators and consumers, and that this relationship should remain unregulated and free, for therein lies the magic of the entire process!

But what about ‘The Doom That Came to Atlantic City’?
The game’s designers will see it come to life after all, as the project has apparently been picked up by Cryptozoic Entertainment! We’re glad, because this game deserved a second chance – and thankfully, it got it!

Twitter Ditches the 140-Character DM Limit

Starting next month, users will be able to send Direct Messages of up to 10,000 characters.

DMs: from 140 to 10,000 characters
There is good news for all Twitter users: the 140-character limit has been raised to 10,000 characters for Direct Messages. The announcement was made by Twitter product designer Sachin Agarwal, who said that from next month onwards all Twitter users will be able to send Direct Messages without worrying about the 140-character limit; they will have the freedom to write Direct Messages as long as 10,000 characters. The relaxed limit applies only to Direct Messages, however; regular tweets will keep their default 140-character cap.

New feature by Twitter
Twitter introduced Direct Messaging earlier, letting users message anyone, whether or not they follow them on Twitter. Group messages can also be kept private: if you are talking to multiple people in a group, or to a brand, you can keep that conversation private among its members. Android and iOS users can also enable receiving Direct Messages from anyone in their settings.

Level of Competition in the domain of social media
Twitter is in direct competition with other social networks, such as Facebook and LinkedIn, and in order to stay in the game and in users’ minds, it constantly needs to make changes. If Twitter doesn’t keep working on its own betterment, it may lose its standing in the eyes of its users.

Impact of change
Twitter announced this relaxation after considering the needs of its target audience and the situation on competing sites, such as Facebook and LinkedIn, where there are no limits on message length. According to Twitter investor Chris Sacca, who wrote in a blog post that Twitter is trying to improve its system, increasing the character limit of Direct Messages is one step toward that improvement.

Introducing customization in DMs
Some people worry that, with the increased character limit for Direct Messages, they may end up receiving DMs from people they don’t follow on Twitter. But don’t worry; Twitter has a solution to this as well. Thanks to the block-list sharing and export feature, if your list of annoying people matches a friend’s, you can import that list from your friend’s account into your own and block a whole set of contacts at once. Just as you can now hold private conversations with each individual in a group chat, you can block an entire list of people rather than one person at a time.
So, from July onwards, enjoy Twitter’s new offering!

Apple Unveils Mac OS X ‘El Capitan’, Jabbing at Windows

Just weeks before the overly-hyped Windows 10 release by Microsoft, Apple announced its latest OS version, called ‘El Capitan’ after the famous rock formation in Yosemite National Park.

The new Mac OS X version features quite a few improvements over its predecessor, Yosemite, as Apple’s SVP of Software Engineering, Craig Federighi, eagerly pointed out during the WWDC 2015 keynote, stating that OS X Yosemite had the fastest adoption rate of any PC operating system in history!


He also compared the adoption rates of iOS 8 and Android 5, with a chart demonstrating that Apple’s customers gain access to the latest versions of their favorite apps far more reliably than the vast majority of users, who are still on Android.


In its attempt to draw attention to its products, Apple failed to mention that the percentages on the first chart were skewed by the sheer number of machines running Windows. You could say this particular jab at Windows completely missed the mark.

What is new in Mac OS X ‘El Capitan’?

Apple has implemented quite a few improvements in ‘El Capitan’, which will be offered as a free download to their users later this fall.

  • Improved Spotlight gives users cleaner results when searching, as it understands naturally-worded queries more reliably and efficiently. However, Apple did not bring Siri (or any other kind of voice control, for that matter) to Spotlight, which leaves it somewhat behind Cortana.
  • A major graphics upgrade over Mac OS X Yosemite through the integration of Metal. Macs’ GPUs will be put to good use with this new feature: Apple claims users will see roughly a 50% increase in graphics rendering speed compared to Yosemite. Sounds almost too good to be true!
  • More organized, cleaner windows for multitasking: to avoid clutter and chaos, Apple has added a feature that lets users swipe to see their open windows and apps neatly stacked on the desktop. Windows can also be moved to a new, clean desktop space.
  • Users can now hold the full-screen button of any app or window to snap it to one side of the screen, effectively creating a split-screen desktop in no time. Windows 10 offers this functionality as well.
  • Beautiful Pinterest-style bookmarks in Safari that let you reach your favorites with much less hassle.
  • Improved Notes that the user can draw on, add multimedia and maps to, and then share online.
  • Every Safari tab that produces sound is now marked with a speaker icon, just as Chrome has done for years. These tabs can be muted independently of each other, or all at once.
  • Trackpad swiping in Mail, which means most basic commands can now be issued with a simple swipe left or right. Will users be able to customize these gestures to their preference? Time will tell.
  • Improved Maps, featuring public transit information and guidance on how to reach your destination on time.
  • The cursor briefly grows to many times its original size when you shake it, making it easier to spot on screen.

What are our thoughts on Mac OS X ‘El Capitan’?

Well, to be honest, this update doesn’t really seem very exciting, as it comes with a short list of new features aimed at improving the user experience without doing much for the UI of the OS.
Compared to Yosemite, ‘El Capitan’ offers only a handful of new features, most of which aren’t particularly original: Apple seems intent on matching what the competition – namely Windows 10 – has to offer, but shows little interest in coming up with original, new ideas for its product.

Besides, the fact that Apple dictates device choice really bothers some users, who would rather stay more versatile and independent, even if that means their products get updates a little later than Apple users do.

The lack of voice control and of a touch interface (for the lucky few among us who work on touchscreen monitors) renders Mac OS X ‘El Capitan’ inferior to Windows 10 – at least on paper!

What is the difference between 32-bit and 64-bit systems?

There is no doubt that most modern PCs have a 64-bit architecture, since it is generally preferred over 32-bit, and ever since the launch of Windows 7, most of them have also run 64-bit operating systems. But why, you might ask? Well, there are quite a few reasons why 64-bit systems are considered superior.

First and foremost, it is a matter of memory. A 32-bit processor can address at most 4 GB of memory, and in practice a 32-bit Windows system can only make use of roughly 3.2 GB even if 4 GB of RAM are installed, which means it will struggle when handling multiple tasks at once, or even a single heavy, memory-hungry application. 64-bit processors, on the other hand, can address huge amounts of memory – more than your rig can probably support!
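To put those limits in perspective, here is a quick back-of-the-envelope calculation; the snippet below is a minimal Python sketch of the arithmetic, not anything specific to Windows:

```python
# Number of distinct byte addresses a flat 32-bit vs 64-bit pointer can reach
addr_32 = 2 ** 32   # 4,294,967,296 bytes = 4 GiB
addr_64 = 2 ** 64   # 18,446,744,073,709,551,616 bytes = 16 EiB

print(f"32-bit address space: {addr_32 / 2**30:.0f} GiB")
print(f"64-bit address space: {addr_64 / 2**60:.0f} EiB")
```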

Besides, 32-bit processors can only move data around in 32-bit chunks, whereas their x64 counterparts work with 64-bit words, which means they are faster and more efficient at handling large amounts of data.

To top it off, 64-bit Windows is backwards compatible, meaning it can run x86 (32-bit) programs with no problem: their installation files are placed in a separate Program Files folder, named ‘Program Files (x86)’ by default, and all programs in that folder are automatically treated as 32-bit applications by 64-bit Windows. The opposite does not hold true, though, since 32-bit Windows cannot run x64 programs at all.

Fun Fact: If you are running an x64 OS, you can check which of your applications are currently running in 32-bit mode by opening the Task Manager and checking the ‘Processes’ tab. Programs running as x86 are marked with *32 at the end!
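If you would rather check from code than from the Task Manager, here is a small Python sketch (standard library only) that reports whether the running interpreter itself is a 32-bit or 64-bit build; the final check is an assumption that only applies on Windows, where the ‘ProgramFiles(x86)’ environment variable exists solely on 64-bit installations:

```python
import os
import platform
import struct
import sys

# Pointer size of this Python build: 4 bytes on 32-bit, 8 bytes on 64-bit
bits = struct.calcsize("P") * 8
print(f"This Python interpreter is a {bits}-bit build")

# Two other common checks that agree with the one above
print(platform.architecture()[0])   # e.g. '64bit'
print(sys.maxsize > 2 ** 32)        # True only on 64-bit builds

# On Windows, 'ProgramFiles(x86)' is defined only on 64-bit systems
print("64-bit Windows" if os.environ.get("ProgramFiles(x86)") else "32-bit Windows or not Windows")
```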

Will Windows 10 be 64-bit or 32-bit?

Recently, the head of the Windows Insider Program, Gabriel Aul, announced that Microsoft Windows 10 will be available in both x64 and x86 versions. Many users were surprised by this revelation; considering all of the advantages 64-bit systems have, you would expect Microsoft to invest solely in Windows 10 x64. Why would they want to spend resources developing Windows 10 for 32-bit configurations, especially since their sluggishness could potentially ‘gimp’ the product, making Windows 10 appear slow and unresponsive?

The answer is simple: there are still more than 70 million PCs in the world with 32-bit architecture – at least according to data provided by Windows Update. Add to that the users who don’t regularly update Windows, and those who don’t have an internet connection, and you’ll realise that those people, whose numbers could easily reach 100 million, are also Microsoft customers in need of support.

But why would anyone still be using a 32-bit PC? Well, there are many answers to that. Perhaps these systems belong to corporations that run predominantly 32-bit applications; many utilitarian, business-related programs are 32-bit, since they don’t really require a lot of horsepower, and the business sector tends to follow the ‘if it ain’t broke, don’t fix it’ adage! It would be unethical, and very unwise as far as public relations are concerned, to leave all those customers hanging.

In the case of home users, as well as home-based entrepreneurs, 32-bit PCs can handle simple tasks just as well as x64 machines can. In fact, about the only people who truly need x64 systems are gamers and digital artists, since graphics programs need a lot of horsepower in order to respond smoothly – not to mention the vast amounts of memory needed for 3D rendering! So why would someone who is content with browsing the internet, watching movies, listening to music and maybe writing a blog need to upgrade to x64? They would have to upgrade their motherboard, CPU (and its cooler), RAM, and maybe their graphics card, for no practical benefit at all, and the cost of the upgrade would be close to that of a brand-new PC (which would be the more advisable purchase in that case).

Besides, Microsoft seems pretty eager to push Windows 10 onto as many PCs as possible, as soon as possible, especially if you consider the fact that the upgrade to the new OS will be free of charge for a year after launch! So why would they purposely exclude such a large portion of the market?


Steam Machines Available for Pre-Order

You have probably heard that Steam has been trying to make PC gaming a lot more accessible to the average game lover by offering smaller and more affordable hardware, and that Valve also announced plans to develop a controller designed specifically with PC gaming in mind. The great news is that the time has come: these devices are now available for pre-order. You can now pre-order the Steam Link streaming box, the Steam Controller and two Linux-based gaming PCs at GameStop, Micromania, EB Games and GAME UK. Best of all, you get your precious hardware almost a month before everyone else.


The Steam Controller is quite a thing to behold. It is designed to emulate a mouse and keyboard using two large circular touchpads, and this innovative input device lets you play games of all genres, from traditional gamepad-style titles to games normally reserved for keyboard and mouse. The Steam Controller is wireless and fully configurable, and it retails for just 49.99 USD.


The Steam Link lets existing Steam gamers extend the reach of their current gaming setup across their home network: simply connect your Steam Machine or gaming PC to your home network, plug the Link into a television and stream your games to it at 1080p. The Steam Link will also run you 49.99 USD.

 

Starting today, you can pre-order your own Steam Machine. Pre-ordering now means you’ll receive it on 16 October, weeks before the official retail launch on 10 November. For the moment there are only two models, but Valve promises more options later on. Syber and Alienware are the two manufacturers offering their own Steam Machine designs, and the prices are not bad at all when compared with the entry price of a console: both start at around 400 USD, but if you are really into the concept, you can drop up to 1,400 USD.
The Alienware Steam Machine will cost 449 USD. It runs SteamOS, ships with the Steam Controller, and packs a dual-core Intel Core i3 processor, 4GB of RAM and GeForce 860M graphics, and that’s just the entry-level configuration. It also has an HDMI input, so you can easily route your other HD gadgets through the Alienware box.

 

The Syber Steam Machine features support for standard desktop GPUs, up to an NVIDIA GTX 980. It is more powerful and lets you upgrade your graphics card in the future, but it doesn’t come with a controller. The Syber Steam Machine will cost you 499 USD.

Cloud Storage Options

Google Drive

Google Drive includes Docs, Sheets, Slides and more. If your daily work involves these web tools, Google Drive is the best place to store your data. It is also worth remembering that it is versatile, since it can be used on Android, Windows, OS X, Linux and iOS. Google Drive works with many plugins, which is why a lot of its customers prefer it, and its pricing is very competitive.

Dropbox

Dropbox is among the oldest companies in the field, having been set up soon after cloud storage was first introduced to the market. Its popularity has continued to grow ever since, and plenty of people have been happy with the services Dropbox provides. It supports the operating systems of both desktop computers and mobile devices, making it considerably more flexible.

SkyDrive

SkyDrive (now known as OneDrive) is built into Windows, including Windows 10, and can also be used by people who use Microsoft Office. SkyDrive is supported on both desktop computers and tablets, so you can use it at your workplace or on the road. In addition, SkyDrive has apps for iOS, Android and Windows, so you can access your documents from wherever you are, provided you are connected to the internet.

Intel’s Xeon E7 v3 New Line of Processors


Intel recently released its new line of processors, the Xeon E7 v3, aimed specifically at businesses that handle large amounts of data and that require time-sensitive decision-making along with a high level of operational security and efficiency. According to Intel SVP and General Manager Diane Bryant, the processors are aimed at faster, even real-time analytics, giving rise to newer and more personalized services and better business efficiency. As can be deduced from that purpose, the processors target large-scale businesses that handle tons of data at a time. The new line also boasts up to 18 cores, making it one of the fastest on the market.

According to Intel, the Xeon E7 v3 processors are not only among the fastest but also among the most efficient in terms of cutting costs. They draw less power thanks to lower-leakage transistors, which in turn reduces cooling needs, and their lower hardware cost makes them cost-efficient in terms of performance over materials. The line features one of the largest and most complex processors ever developed, measuring 663.5 square millimeters, and it supports up to 32-socket configurations, the largest in the industry.

In terms of performance, the new line offers features that help enhance decision-making. These include Intel Transactional Synchronization Extensions (Intel TSX), which allow much faster processing of business applications, and the latest Intel AES-NI for enhanced security and reliability. The processors come with up to 18 cores and a last-level cache of 45 MB which, according to Intel, supports up to 70% more analytics sessions than prior models. This is what will help large businesses process tons of data in a short period of time, especially when responding to clients quickly is a critical part of their services.

The processors also feature memory mirroring, advanced machine error checking, Intel Run Sure technology and additional architectural features aimed at improving mission-critical tasks. They offer Hyper-Threading with a capacity for 36 threads, a scalable memory buffer supporting up to 1,536 GB of memory per processor, clock speeds from 1.9 GHz up to 3.4 GHz, and two new dual-channel memory controllers. The line comprises 12 new processors with four, eight, ten, twelve, fourteen, sixteen and eighteen cores, and cache sizes of 20, 25, 30, 35, 40 and 45 MB.

The new Intel Xeon E7 v3 line is aimed at large businesses for now, with faster and more efficient data analytics as its major goal. It also carries an almost prohibitive price of $1,224 to $7,175, quoted in quantities of 1,000 units. So far, the companies expected to adopt the new line include Dell, Cisco, HP, Huawei, Bull, ZTE, Power Leader, Hitachi, Sugon, Supermicro and Fujitsu, to name a few. The new processors are expected to gather accurate, real-time data from large businesses’ consumers, which in turn should help enhance existing services or even create new ones.

US Running Out of IPv4 Addresses?


The Internet Corporation for Assigned Names and Numbers (ICANN), which manages the assignment of the internet protocol (IP) addresses available for use, recently announced that IPv4 addresses may run out by early 2016. The average internet user may not notice how this affects their daily browsing, but larger entities, such as internet service providers and data centers, can be heavily affected, both in the availability of their services and in significant financial costs. So, instead of the usual 32-bit addresses, users may now have to get used to the 128-bit system that is bound to replace them: IPv6.

Background of the Shortage

Back in the 1980s, when the internet was first used by the US Department of Defense, IPv4 was designed as a way to identify, locate and route information to specific devices connected to the internet. It was designed to accommodate roughly 4.3 billion internet addresses, which was considered at the time to be more than enough for all internet users. The internet, however, was released for public use in 1994, and with it came a large number of consumers of unique IPs. In the years that followed, the introduction of all kinds of internet-connected gadgets pushed that demand far beyond expectations, with the number of connected devices running into the billions.
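That 4.3 billion figure follows directly from the 32-bit address width; here is a minimal Python sketch of the arithmetic, purely for illustration:

```python
# An IPv4 address is 32 bits wide, an IPv6 address is 128 bits wide
ipv4_total = 2 ** 32    # 4,294,967,296 possible addresses (~4.3 billion)
ipv6_total = 2 ** 128   # roughly 3.4 x 10**38 possible addresses

print(f"IPv4: {ipv4_total:,} addresses")
print(f"IPv6: {float(ipv6_total):.2e} addresses")
```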

In 2011, ICANN allocated the last blocks of usable IPs to the regional registries, meaning it has officially run out of available IPv4 addresses to provide for future users of the internet. Europe and Asia were the first regions to totally exhaust their available IPs, and the US may be the last to experience this type of shortage, together with Africa and Latin America.

The Solution

IPv6 was created back in 1998, well before the much-anticipated shortage was expected to occur, so there is really no need to panic about the end of the internet being near. Even then, the Internet Engineering Task Force already knew that the IPv4 address space was not limitless, and so it started building the solution before the problem arrived. IPv6 brings several benefits that cannot be found in IPv4 (a short code illustration follows the list below):

  • Can accommodate up to 340 trillion trillion trillion addresses
  • More efficient routing
  • Built-in security features
  • Easy management
  • More efficient multi-cast routing
  • Elimination of the need for Network Address Translation
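As a small illustration of the difference in address width and prefix handling, here is a minimal sketch using Python’s standard-library ipaddress module; the addresses used are the reserved documentation ranges, chosen purely as examples:

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")        # an IPv4 address (32 bits)
v6 = ipaddress.ip_address("2001:db8::1")      # an IPv6 address (128 bits)
print(v4.version, v4.max_prefixlen)           # 4 32
print(v6.version, v6.max_prefixlen)           # 6 128

# Networks: a tiny IPv4 subnet vs a typical IPv6 allocation
v4_net = ipaddress.ip_network("192.0.2.0/24")
v6_net = ipaddress.ip_network("2001:db8::/32")
print(v4_net.num_addresses)                   # 256
print(v6_net.num_addresses)                   # 2**96, an astronomically large pool
```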

This, however, also comes with several potential problems if not addressed early on: IPv6 uses a different packet format from IPv4, which means an IPv6-only connection may not be able to reach some sites, and switching to IPv6 capability also imposes considerable costs for compatible hardware. If you are an average user, this may not sound like a huge problem, as ICANN has explained that current websites using IPv4 will continue to work for many years. The problem, however, lies with internet service providers, such as those selling web hosting, domain name registration and internet access, as newer sites will soon be hosted on IPv6.