Sunday, December 20, 2015

USB Type C - It's Here to Stay

          Charging cables have been around for as long as charging batteries has been a thing. There were the dark and scary days of Samsung fat-chargers, Samsung thick-chargers, Mini-USB, Micro-USB Type A, Micro-USB Type B, and Apple's 30-pin dock connector. Times were confusing. Thankfully, the 30-pin connector and Micro-USB Type B emerged as the victors of the charging port wars. Then Apple made a great leap forward with the Lightning connector: it was one of the first reversible connectors, and it dramatically slimmed things down from its 30-pin predecessor. USB tried to beat Lightning with Micro-USB 3.0, which offered faster file transfers and faster charging, but alas, it failed because it was roughly twice as wide as Micro-USB 2.0, though backwards compatibility with the smaller, older 2.0 cable was a nice bonus.

          This still didn't solve the size issue, or the fact that the cable wasn't reversible, which is where USB Type C comes into play.
          USB Type C is almost the inverse of Lightning. Looking at the image below, you could almost swear that Lightning (the middle cable) could fit into USB Type C (the leftmost connector). Unlike its Micro-USB 3.0 predecessor, USB Type C is smaller, offers faster data transfer and faster charging, and is reversible. Also unlike its predecessor, it appears in more than just the Samsung Galaxy S5: it is already in the new Apple MacBook, the Nexus 6P, the Nexus 5X, the Chromebook Pixel, the OnePlus Two, and the Nokia N1. And these are just the devices from 2015; there will surely be ten times as many soon. The fact that Apple adopted this standard in 2015 surprised many people, and it is another sign that, unlike Micro-USB 3.0, USB Type C is here to stay.


-303 words

The Life of a Flashaholic

          I know. I can't help it. I've tried for 3 years, but it just won't go away. It all started when my mom and sister had problems with their Android phones. HTC no longer supplied updates for the phone, so it was stuck on Android 4.1. Actually, it was worse than that: since carriers had to approve Android updates, and my carrier, I-Wireless, never approved the 4.1 update, my mom and sister were stuck on Android 2.3.2. They faced numerous crashes daily, freezes, TERRIBLE battery life, and overall just a lot of bugs, and unfortunately, there was nothing either of them could do, since they had no access to updates. So I spent the course of a week learning everything I could about Android: root, bootloaders, baseband versions, S-OFF, partitions, hashes, kernels, ROMs, recoveries, nandroid backups, and a lot of other confusing things, all in the hope that I could get their phones running an up-to-date CyanogenMod, an OS I could install to remove all of their problems. I ran into a lot of obstacles along the way, and nearly gave up when CyanogenMod would install but then crash the moment I tried to launch an app. However, after flashing (installing) a million different things, and nearly crying at midnight over what I thought was a bricked (software-locked, unusable) phone, I finally got both phones working, and I felt accomplished. That feeling would carry on with me throughout my teenage years.
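For anyone curious what that flashing process actually looks like, here is a rough sketch of the typical sequence, wrapped in Python purely for readability. The recovery image and ROM zip names below are placeholders, not the exact files I used, and the real procedure varies wildly from phone to phone (and will void your warranty):

```python
import subprocess

def run(cmd):
    """Print and run one adb/fastboot command, stopping if it fails."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Rough outline of a typical custom-ROM install (placeholder filenames):
run(["adb", "reboot", "bootloader"])                # reboot the phone into its bootloader
run(["fastboot", "oem", "unlock"])                  # unlock the bootloader (this wipes the phone!)
run(["fastboot", "flash", "recovery", "twrp.img"])  # flash a custom recovery image
# From the custom recovery: take a nandroid backup first, then sideload the ROM.
run(["adb", "sideload", "cyanogenmod.zip"])         # push the ROM zip to the recovery
```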
          A year later, I got my own smartphone. It was a Windows Phone 8 device, but I quickly found out that it too suffered from having to wait on carriers to approve updates. "Who designed this system?!" I thought to myself. I soon found out that there was a developer program where, if I enrolled as a developer, I could install the latest OS updates right as they came out...with the risk of minor bugs. I didn't care; I needed to have the coolest features, like Cortana or Windows 10 Mobile. That continued for a year, until I got my very first Android phone. The phone was awesome! It had a 50MP camera feature, and it ran pretty well...but then I discovered that it, too, had a beta program that allowed me to run the latest software. So I did some modding and started running the latest software. However, my battery life soon started to drop, and I also started to hate the aftermarket look of the OS...it had skins all over the icons, and everything just looked too kiddish for me, soooo....I decided to modify this phone as well, and installed a custom ROM called GummyROM, which was basically a ported CyanogenMod. Well, that was all dandy, until I got a new phone 6 months later, which came preloaded with a commercial version of CyanogenMod known as CyanogenOS. So...whoopee! That meant I didn't have to do any flashing of ROMs, and I could just live with the phone and be happy....well, that lasted maybe 5 months, before I realized that CyanogenOS was slow to receive the updates I was eager about; as an actual corporate product, not a community mod, it had to go through testing and Google certification....blah! -_- So I installed custom ROMs on it, such as Resurrection Remix, which was very stable and fast and gave me great battery life and the latest features. Eventually, though, I got tired of running ROMs, and my phone could not use my carrier's LTE, so I wanted a phone that would run the latest software, give me the latest features, and be a great performer, with LTE that would work with I-Wireless. That led me to the Nexus 6P.
          The Nexus 6P has great features and specs, but most importantly, it runs stock Android with no ugly skins, gets great battery life, and, since it is a Nexus sold by Google, it is the FIRST to get updates! It has been a match made in heaven for me. However, after owning the phone for a week and a half, I find myself still wanting to install ROMs: although I am running the latest bleeding-edge software, I no longer have the million customization options I had on my old modded phones. So even on the latest, greatest, and cleanest software, I find myself yearning for more, because I happen to be a hopeless Flashaholic.

-755 words

Microsoft: From Windows to Services

           Windows 10. It ushered in a whole new era for the PC. It fixed the seemingly unlimited complaints from the Windows 8 and Windows 8.1 days, when former CEO Steve Ballmer had attempted to steer the company toward a 100% unified ecosystem by making the operating system essentially identical across desktop, tablet, and (almost) phone. That turned out to be a huge flop: it was not properly optimized for mouse and keyboard, and it took away the beloved Start menu that had been in use since Windows 95. With the appointment of new CEO Satya Nadella, however, Microsoft began a new mission. Nadella decided to turn Microsoft into a services company; no longer would Microsoft earn the majority of its profits from Windows. Instead, it would give away the new version of Windows, Windows 10, for free for one year, and would make money off of subscriptions to products such as its Azure cloud platform for business, Office, and OneDrive, plus additional Windows 10 licenses later on.
           While many believed this strategy would fail, it has actually turned out to be quite a success for the company. Office is bringing in more revenue than ever, and better yet, Microsoft is almost guaranteed a certain monthly revenue, instead of the unpredictable releases of Office every 2-5 years that might sell a million copies, or maybe 200 million. The numbers were never certain, and company profitability would fluctuate greatly year to year, making the success of the company hard to judge from an investor's perspective. Almost everything in our lives is turning into monthly payments: cell phone plans, Office subscriptions, electricity, mortgages, cars, Spotify, Amazon Prime, and so on. Everything is becoming a subscription, so I was not surprised by the decision to give away Windows 10 for free for a year in an effort to claim more consumers for Microsoft's other products; it's almost like a subsidized trial that gives you the basic foundation to work with, but then lures you in with temptations of productivity and efficient cloud storage! AHHHHH! It's a trap! Just kidding. It really is quite an intelligent way of going about business, and so far, Windows 10 has been a large success. It has already captured 9.96% of the Windows market share (and since Windows holds roughly 90% of the total desktop operating system market, that's 10% of 90%, or about 9% of all desktop OSes, for people who like to do math), and its reach is still spreading. While it may not sound very exciting to have 10% of Windows users upgrading, that happens to be 75 MILLION upgrades in just 5 months, which is roughly half a million people clicking the download button every day! So while many may question whether Microsoft should have given away Windows, I believe it was a smart choice, because in the end it will 1) make customers feel happy about getting a free gift (a Windows upgrade), 2) ensure that consumers are up to date and protected from malicious attacks, and 3) ensure that Microsoft can better control and market its products to its Windows 10 users.
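For anyone who wants to sanity-check those numbers, here's the back-of-the-envelope math as a tiny Python snippet (the 90% desktop share and the 5-month window are the same rough figures quoted above, not precise statistics):

```python
# Rough sanity check of the adoption figures quoted above.
windows_share_of_desktops = 0.90      # Windows' approximate share of desktop OSes
win10_share_of_windows = 0.0996       # Windows 10's share of Windows installs

print(f"Windows 10 share of all desktop OSes: "
      f"{windows_share_of_desktops * win10_share_of_windows:.1%}")   # ~9.0%

upgrades = 75_000_000                 # upgrades in roughly the first 5 months
days = 5 * 30
print(f"Average upgrades per day: {upgrades / days:,.0f}")           # ~500,000
```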

-547 words

Sunday, December 13, 2015

3D Printers: Anything is possible!

          3D printers. They can make almost anything, from air funnels for more effective PC cooling to true manufacturing demos and prototypes; the possibilities seem endless. However, 3D printers are slowly starting to work their way into the medical field as well, not just into geeky manufacturing plants. At Iowa State University, a cat by the name of Vincent was found to be missing his rear tibias, so he could not walk. As an avid cat lover myself, I would be so sad to see an animal suffer like that. However, the folks at Iowa State managed to construct 3D-printed legs that attach to his existing bone structure and allow him to walk. Just take a look for yourself, and be amazed at how far printing technology has come!

-139 words

Mobile Payments=Messy!

          Cash. Change. It's a pain to carry around. It takes up space, and metal coins weigh your pockets down. Then came checks: sure, let's just carry along a pen and a bulky checkbook, make sure there is a large enough balance in the checking account, and boom, people could spend a solid minute or two writing out a check for $35 in groceries. But that wasn't faster than cash; it only traded the weight of coins for the risk of bounced checks and the extra bulk of a checkbook. So we still needed a new solution. Ah-ha! The credit card! Sure, let's just swipe a piece of plastic with a magnetic strip, and we will no longer need to bring along cash! Well, sure, that worked, until....well, you couldn't quickly pay a friend for some help he gave you with your computer, or the neighbor boy for helping out with some quick yard work. So the payment methods continued to live alongside each other. But heck, why did we need cheap pieces of plastic stuffed in our pockets? Weren't we more advanced than printed pieces of paper and plastic cards? It sounds a lot like playing shop as a kid with fake dollar bills and, sometimes, fake, thick plastic credit cards.

But then came the smartphone era, and by 2011, companies such as Google and PayPal realized that, if we could purchase things with our smartphones online, while browsing Amazon.com or eBay.com, why couldn't we bring that ability to pay with our phones into physical stores as well? And that, my friends, is how mobile phone payments, such as Android Pay, Google Wallet, Apple Pay, Samsung Pay, Walmart Pay, PayPal Mobile, and the rest, got their start.
          In 2011, Google and PayPal dropped their bombshells with Google Wallet and PayPal Mobile, respectively. Both were designed to run on the Android ecosystem, and both made contactless, or tap-to-pay, payments possible through a low-powered wireless standard called NFC, or Near Field Communication, the name itself implying that it is both extremely short range (within a few centimeters) and low power. Large stores such as McDonald's immediately took charge and replaced their old credit card scanners with newer, shinier ones that could both swipe cards and accept NFC payments. However, the technology never took off the way Google and PayPal had planned, which was unfortunate, because it seemed years ahead of its time: hell, the iPhone 4 had been released only about a year prior, and that phone was still connecting over 3G data. Meanwhile, here was Google, pioneering wireless payments made from your phone. Several years later, and by several years I mean 3.5 years later, after Apple had released the iPhone 4s, the iPhone 5, the iPhone 5s, and finally the iPhone 6, Apple decided it was time to take its own crack at wireless payments and released Apple Pay. Sure, it was almost 4 years behind, and it didn't work with some of the largest credit card companies, Discover included (the fourth-largest credit card network), but Apple did market the product better than Google and made partnerships to ensure that tap-to-pay kiosks were more widespread, such as in Starbucks. So three wireless payment systems? I can handle that...but this is where things start to get messy.
          Even though Google Wallet already existed for Android, Samsung felt that Google wasn't doing enough to push the service, and thus decided to release Samsung Pay, in direct competition with Apple and with its own marketing campaign. Then Google rolled out its revised Google Wallet, dubbed Android Pay, which is built into the operating system as of Android 6.0 and fully compatible with its integrated fingerprint support. So now we have reached 5 services, or really 4.5, since Google Wallet still runs on older devices that haven't been updated to the latest software....hmm. And now, as of this week, Walmart has created its own app, dubbed Walmart Pay, which is rolling out in stores slowly over the course of the next year. As the name implies, this payment system only works at Walmart, but it lets you better manage receipts and returns. It makes me ask: why do we need 5.5, or 6, or really anything more than 2 or 3 mobile payment systems?! The fragmentation is getting outrageous. I do not want to be required to have Walmart Pay, Target Pay, Android Pay, American Eagle Pay, Casey's Pay, et cetera. I see what Walmart was trying to do with its built-in receipt management, but it is just adding to an already polluted system. I also see what Samsung was trying to do, as Apple and Samsung are highly competitive companies and Samsung needed some catchy advertising to stay relevant, but at this point I believe Samsung should work with Google to convert people to Android Pay, end its own service, and put a stop to the terribly messy fragmentation that is mobile payments!

-880 words

Thursday, December 10, 2015

Dear Mr. Lightbulb

Dear Mr. Lightbulb,
           Thank you for everything you do. Centuries ago, when the sun would make its spectacular presentation of setting, the world would come to a halt. Just as plants must stop photosynthesis at nightfall, we humans were also forced to put everything we were working on on hold, at the mercy of the bright sun. While summer days near the equator are long and plentiful in light, winters far north or south of that mark bring long stretches of darkness, with my homeland of Iowa struggling through only 9.5 hours of light per day in December. Not to mention, you have made it possible to travel through the dark hours of night and fog, your light leading the way like Rudolph’s glowing red nose. Rest assured that, while many have forgotten your presence in everyday life, I remember you, and I won’t forget you. I’ve watched you grow up over the years alongside me, from your young, immature incandescent days, to your maturing teenage CFL years. Look at you now though, in the prime of your adult years, having sprouted into such a strong LED bulb.
          Now, I know you have done terrible things in your life. You helped create the factory system, enslaving millions of children at the turn of the 20th century for nickels on the dollar, 24 hours around the clock. You have wasted billions of gallons of fossil fuels to power your previously inefficient ways of bringing light to the homes and families that depended on you. You would be dropped and explode like a bomb, glass shards and metal filament splintering into the feet of both the innocent young and the elderly, who were simply looking to lighten up a dark hallway.
           But where you have enslaved millions of our children, burned through a multitude of our limited fossil fuels, and stabbed and punctured your very creators and gods, you have always tried to brighten our lives, and that is an indisputable fact. The day Mr. Edison refined your filament was the day you started feeling needed, and we do need you. Without you, our world is dark, full of shadowy corners hiding scary monsters, and our attics remain black caves for only the cats and bats to roam. For every one thing you have done wrong, you have done ten things right, and for all of the illuminating you do for us that so commonly goes underappreciated, I thank you.

Stay bright,
Tyler

-422 words

Sunday, December 6, 2015

Why the iPad Pro is Not So Pro

          The iPad. It was revolutionary at a time when Android tablets were just running a blown-up phone OS (Android back then was only designed for small, portrait-mode smartphones; companies such as Asus, HP, and HTC recognized the need for tablets and modified the crap out of Android 2.0 to make it run in landscape and portrait, but all that did was create a buggy interface and a stretched-out experience). Today, though, the iPad no longer holds such an innovative title in my eyes. When Apple announced the iPad Pro, I just laughed. It is merely a four-years-late Microsoft Surface Pro with a stylus that, absurdly, recharges by plugging into the iPad itself, ready to snap off, and it lacks Apple's own 3D Touch. Normally, Apple unifies the experience across devices, carrying over features like Retina displays or Touch ID: if someone knows how to use an iPhone, they know how to use an iPad, an iPod, and an iPad Pro. Now, however, if someone buys an iPhone 6s and starts using 3D Touch to interact with everything in the UI, the moment they pick up the iPad Pro they quickly learn that their fancy new device does not do everything their phone does, and that creates discontinuities, something very unlike Apple. Not only are there discontinuities, but Apple has essentially copied Microsoft's screen size, high resolution, and pen idea and slapped "revolutionary" on the front. But you know why I really dislike the iPad Pro? Because without Windows 10 running on it, it is just an overglorified paperweight in my eyes, as the apps are never powerful enough. PROfessionals need real software, not App Store apps and Angry Birds blown up on an oversized iPad. And that is why the iPad Pro will never be a real Pro, because no professional in their right mind would rely solely on App Store apps.

-316 words

Custom Computer Cases

Most people consider computers colorless and boring: bland HP and Dell logos branded on the front, black chassis, flat sides, and a slightly arced, glossy black front.

However, these stereotypical cases are just bland OEM (Original Equipment Manufacturer) cases that ship with millions of PCs, with customization options about as high as Death Valley (which sits below sea level, for the record). So if cases from OEMs are boring by nature, what is this blog post about? That, my friend, is where we wander into the world of custom cases.
Custom computer cases come from manufacturers that usually sell individual computer parts for building enthusiasts who want to assemble their own systems. Cases can range from $20 to $1000, as crazy as that may seem. However, cases are usually priced quite fairly for what you get, and they come in nearly any shape and size. Some are a mere pop can and a half tall, while others can easily reach waist height (for the shorter people out there, that's almost 3 ft!). There are about four sizes of consumer cases (there are also enterprise cases, sized in rack units such as 1U and 2U, which serve as server cases designed to fit lots of hard drives into the thinnest, stackable chassis possible). From smallest to largest, they are Mini-ITX, Mid Tower, Full Tower, and Ultra. On a technology site such as Newegg.com or a general retailer such as Amazon.com, it is easy to find more than a thousand cases that are Mini-ITX, Mid Tower, or Full Tower. Finding an Ultra tower on these sites, however, is nearly impossible, as Ultra towers are sold only on the individual websites of very small companies that build systems sometimes as large as 4 feet by 4 feet by 1.5 feet. These massive systems require completely custom shipping and weigh a ton from all the metal that goes into them, so, expectedly, they are VERY expensive, but with one of them, you could be declared the ultimate bad-ass!
However, for most of the PC crowd, the first three sizes work fine, and thus comes the decision among the three. At first, I started with the largest Mid Tower possible, as it was almost the same size as a Full Tower but half the price, at about $65.
It was an amazing case, with room for a large radiator in the top for water cooling, and plenty of red fans to keep everything cool. However, over time, this case proved just too large to haul back and forth to my dad’s house, so after 3 years, I changed things up and went Mini-ITX.
I went with a $45 case that was about 5 times smaller in volume than my previous case; it is literally a cube roughly 2 pop cans x 2 pop cans x 2 pop cans in size, as wacky as that dimension sounds. Inside, it packs a large front fan (200mm) and mounts for mini fans (80mm) in the rear. Overall, I’m very satisfied with the small size, as it is very maneuverable and light; it does sacrifice some cooling capacity, but overall the thermals are still very good. If you ever decide to purchase a computer case to spice things up a little bit, be sure to check that the case you order will (see the little sketch after this list):
  1. Fit and house all of your components properly, especially your motherboard, graphics card, and CPU heatsink.
  2. Have an appropriate number of slots in the rear to house all expansion cards coming off of the PCI and PCI-Express slots.
  3. Include enough fans to cool everything adequately, or at least leave extra mounts for you to add fans should cooling prove inadequate.
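If you want to be systematic about it, that checklist boils down to comparing your parts against the case's published limits. Here is a toy Python sketch of the idea; the form factor, clearances, and slot counts below are made-up placeholder numbers for an imaginary Mini-ITX cube, not the specs of any real case.

```python
# Toy case-compatibility check; all limits below are illustrative placeholders.
SUPPORTED_BOARDS = {"Mini-ITX"}     # hypothetical cube case: Mini-ITX boards only
MAX_GPU_LENGTH_MM = 280             # graphics card clearance
MAX_COOLER_HEIGHT_MM = 130          # CPU heatsink clearance
REAR_EXPANSION_SLOTS = 2            # PCI / PCI-Express slot covers in the rear
FAN_MOUNTS = 3                      # total fan mounting positions

def check_fit(board, gpu_length_mm, cooler_height_mm, expansion_cards, fans_planned):
    problems = []
    if board not in SUPPORTED_BOARDS:
        problems.append(f"a {board} motherboard will not mount in this case")
    if gpu_length_mm > MAX_GPU_LENGTH_MM:
        problems.append("the graphics card is too long")
    if cooler_height_mm > MAX_COOLER_HEIGHT_MM:
        problems.append("the CPU heatsink is too tall")
    if expansion_cards > REAR_EXPANSION_SLOTS:
        problems.append("not enough rear expansion slots")
    if fans_planned > FAN_MOUNTS:
        problems.append("not enough fan mounts for the planned cooling")
    return problems or ["everything fits!"]

print(check_fit("Mini-ITX", gpu_length_mm=270, cooler_height_mm=120,
                expansion_cards=1, fans_planned=2))
```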

-654 words

Thursday, December 3, 2015

What is root?

Root? What is it? A root is both a structural and nutritional supplier to vegetation that….NO! Not a plant root, silly. Android root! So what is Android root, you might be wondering? Well, let's start with the basics. Android root is similar to having administrator access on a Windows computer. Doing certain things on Windows requires elevated privileges, or administrator rights, to make system-wide changes, such as installing certain programs, deleting certain system files, and whatnot. We’ve all probably seen the pop-up that says, "Enter the administrator password to proceed," especially anyone who has used school computers. Anyway, Android has a very similar mechanism built in….but not at the same level. Android’s "administrator" access is named "root" access because, like a plant's root, it sits at the very bottom level of the operating system. The main differentiating factor, however, is that almost all Android phones ship with no access to root by default. The user only has limited access to their own files and to installing applications from the Google Play Store, but modifying the system for audio equalizers, tweaking the OS for battery savings, or secretly saving Snapchats: root can do almost anything a user would normally think impossible. It's disabled by default to protect the user from accidentally installing malware that could break the phone and keep it from booting, which makes sense, but it can be enabled through a few relatively simple hacking methods. Personally, I used to use it to save Snapchats secretly AND without the timestamp in the corner, and also to get an unlimited caption length, but now I just use it for a mod named ViPER4Android that gives me amazing audio equalizers. Again, though, there are thousands of uses. It is very similar to jailbreaking an iPhone, if you've ever heard that term, but it allows even more possibilities, and if you're willing to sometimes risk your warranty, it can open many cool and functional doorways for experienced users.
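To make the idea concrete, here is a minimal sketch of how an app (or a curious user in a terminal) can test whether root is actually available: it asks the su binary to run a command as the superuser and checks for user ID 0. This assumes Python and a su binary are present, as on a rooted phone with a terminal app; it is only an illustration of what "having root" means, not a rooting method.

```python
import subprocess

def has_root() -> bool:
    """Return True if the 'su' binary exists and grants us uid 0 (root)."""
    try:
        result = subprocess.run(["su", "-c", "id"],
                                capture_output=True, text=True, timeout=15)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        # No su binary installed, or the root-permission prompt was never answered.
        return False
    return result.returncode == 0 and "uid=0" in result.stdout

print("Rooted!" if has_root() else "No root access on this device.")
```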

-344 words

Sunday, November 22, 2015

Why 4K Is Overhyped For The Wrong Reasons

           4K. Yeah, those "expensive" (but coming down in price) televisions in Best Buy that look so damn good. You look at one and immediately notice that the colors are rich, the saturation is perfect, glare is minimal, the image is crisp, and the sets are so thin and premium looking. But they're overhyped, and for all the wrong reasons.
           Let's start with what 4K is. It's a display with a resolution of 3840 × 2160 pixels, adding up to a pretty large 8.3 million pixels. Each of those pixels must be updated every time the image changes, which takes more horsepower from the graphics processing unit (GPU) to push all 8.3 megapixels. So immediately, there are two more costly components to manufacture: a higher-resolution panel and a faster GPU. Alright, so are those the only differences between 4K TVs and 1080p ones? No, and that is where the line blurs between what truly makes a 4K TV look great and what most people overlook.
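The pixel math itself is simple enough to check in a couple of lines of Python:

```python
# Pixel counts for 4K (UHD) versus 1080p.
uhd_pixels = 3840 * 2160      # 8,294,400  (the "8.3 million" above)
full_hd_pixels = 1920 * 1080  # 2,073,600

print(f"4K pixels:    {uhd_pixels:,}")
print(f"1080p pixels: {full_hd_pixels:,}")
print(f"A 4K panel has {uhd_pixels / full_hd_pixels:.0f}x the pixels to drive.")
```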
          The colors. They look so vibrant, so rich, so luscious. Watching a gazelle dash across a golden meadow has never looked as good as it does on that Sony 4K television. You take a gander at the 1080p TV to the left and are immediately struck by how dull its colors look. Everything has a washed-out shade of white to it, and there is even a slight glimmer of the bright fluorescent store lights flashing on the screen, with no such thing occurring on that fancy 4K TV. What's the deal? Well, as I explained earlier, 4K just refers to the number of pixels on the screen and has nothing to do with color at all. So why are the colors better? When you buy a 4K TV, the price is high for three main reasons: you're paying an early-adopter premium for a new technology, you're paying for the higher-resolution panel, and you're paying for the faster processing unit. But what makes those colors so good? A large part of that extra cost goes into the front panel. On top of every display panel (the panel that contains the pixels) sits a transparent front layer, and unfortunately, that layer, which is produced quite cheaply to keep costs down in 1080p TVs, can cause glare and a loss of color accuracy. Since you're paying such a premium for a 4K TV, manufacturers such as Sony and Samsung can afford to put a very expensive front layer between you and the display panel, and that minimizes nearly all glare and color loss. So next time you are looking at 4K TVs, you, the educated reader, will know that it's not the pixels that make the picture so pretty, but the nice, glare-free, expensive piece of plastic making everything gleam with bright colors.

-509 words

Everything You Write Is Plagiarism

         You, stop! Stop plagiarising. Everything you write is plagiarism. Everything you speak is copying. Everything you have thought has already been put into exact words. Even your teachers are plagiarising every minute of every day. All your classmates are plagiarising too. Because everything you, your classmates, your teacher, and your parents write is plagiarism. How, one might ask? Well, let's review the definition of plagiarism:
"The practice of taking someone else's work or ideas and passing it off as one's own." Not only are you plagiarising, but I'm plagiarising this entire blog post. Take a look for yourself:
          See, right there! In the bold text! I totally just copied and pasted the beginning of this blog post from...wait...what is all that gibberish?? Well, I welcome you, reader, to the Library of Babel.
          The Library of Babel. It originated as a concept in the short story of the very same name, "The Library of Babel," written by Jorge Luis Borges back in 1941. In the story, a magical library exists that holds not only all past knowledge in the universe, but every single thing that has ever been or will ever be said. Cool concept, sure; too bad it doesn't exist....except it does. Yeah, I know what you're thinking....and technically, that thought is already in the Library of Babel too. A clever programmer by the name of Jonathan Basile wrote a computer algorithm that can generate the trillions upon trillions of pages of a computerized library containing every single 3,200-character essay, paper, paragraph, whatever you name it, that ever will be created. While it is true that Basile did not personally write every piece of text his algorithm can produce, it is also true that everything you or I write has, in a sense, already been written, and we take credit for it even though it already exists somewhere in that library, created not by a person but by SOMETHING else, a computer. But created by computer or human, it has been created, and we are taking credit for something that already exists, and that, in my book, makes everything humanity will ever write in the future plagiarism.
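To get a feel for how a computer can "contain" every page without storing any of them, here is a toy Python sketch: each page is generated deterministically from its address, so the same address always yields the same 3,200 characters. This is only a simplified analogue of the idea; Basile's real site uses an invertible algorithm so it can also search for a given text, which this toy version cannot do, and the address format below is made up.

```python
import random
import string

ALPHABET = string.ascii_lowercase + " ,."   # roughly the character set the site uses
PAGE_LENGTH = 3200

def page_at(address: str) -> str:
    """Deterministically generate the 3,200-character page stored at this address."""
    rng = random.Random(address)             # the address itself seeds the generator
    return "".join(rng.choice(ALPHABET) for _ in range(PAGE_LENGTH))

# The "library" needs no storage: a page springs into existence when you ask for it,
# and asking again for the same address always returns the exact same page.
assert page_at("hex 3f2, wall 1, shelf 4") == page_at("hex 3f2, wall 1, shelf 4")
print(page_at("hex 3f2, wall 1, shelf 4")[:80])   # first 80 characters of that page
```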

Sources: http://futurism.com/meet-the-digital-library-of-babel-a-complete-combination-of-every-possible-combination-of-letters-ever/

-352 words

Streaming Apps?

          For quite a while now, streaming has been a pretty big thing. It has been around since the dawn of YouTube at the beginning of the 21st century, and today we stream movies and TV shows through Netflix or Amazon Prime, and music through Pandora, Spotify, Apple Music, Google Play Music, SoundCloud....yeah, the list of companies that stream digital content is almost endless! Almost any consumable medium can be streamed...except apps.
          Apps, since the dawn of the Apple App Store in 2008, have required the user to go to the app store of choice, click download, sometimes get prompted for a password, enter the password, wait for the app to download and then install, and then finally open the app to do whatever they wanted to do in the first place. It doesn't sound like much work, but it's a hassle for some. If I go to facebook.com on my phone, I want it to seamlessly open an interface I can use Facebook on, not prompt me to "Click here to install our mobile app, then, after it's downloaded 40MB a few minutes later, you can open it, enter your sign-in credentials, and then finally, if you can remember what you were doing, go do it!" Hence, we have mobile sites (such as m.facebook.com), but these mobile sites, while vastly improved since the dawn of the mobile internet era, are still not as good as their app equivalents due to web browser limitations. There has long been a need for a middle ground between a crippled web interface and having to download apps that may only be needed once, such as booking a hotel for a single night without feeling obligated to download Travelocity's app. This is where Google's new innovation steps in: streaming apps.
          Streaming apps would be made possible by Google's remote cloud servers running layers of virtualization to run the app in the cloud and then stream the results down to the user. However, this method of delivery requires a strong Wi-Fi connection, as the responsiveness of the app depends heavily on data transfer speeds but also on low latency, something that is hard to achieve on a mobile network in all areas. If a fast network weren't available, the streaming-app concept would crumble, as there would be too much delay between a user's touch and the app's animations and transitions.
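To see why latency matters so much here, consider a toy latency budget. Every number below is an illustrative assumption (not a measured figure from Google's service): a tap has to travel to the server, the app has to render remotely, and the resulting frames have to be encoded, sent back, and decoded before the screen responds.

```python
# Toy latency budget for one streamed-app interaction; all numbers are assumptions.
def interaction_latency_ms(network_rtt_ms, server_render_ms, encode_decode_ms):
    # One network round trip (tap up, frames back) plus remote rendering
    # plus video encode/decode overhead.
    return network_rtt_ms + server_render_ms + encode_decode_ms

RESPONSIVE_MS = 100   # a common rule of thumb for an interaction that "feels instant"

for network, rtt in [("good Wi-Fi", 20), ("average LTE", 70), ("congested 3G", 300)]:
    total = interaction_latency_ms(rtt, server_render_ms=30, encode_decode_ms=20)
    verdict = "feels native" if total <= RESPONSIVE_MS else "feels laggy"
    print(f"{network}: ~{total} ms per tap -> {verdict}")
```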
          Overall, I love the concept, and I hope Google brings it to a lot of apps soon. While I think it makes no sense to stream apps such as games quite yet, though Nvidia has committed to the idea with its Shield console and streamed games, I believe streaming small apps that will only be used once or twice in a long while is more worthwhile, such as Hotwire for spontaneous hotel bookings. I look forward to this project moving forward: who knows, maybe one day, every app will be streamed!

-515 words

Sunday, November 15, 2015

Tesla: The Car Is LEARNING!

          A few weeks ago, I talked about Tesla's version 7.0 software, which released a beta feature that allows the car to drive itself. Once the driver is, say, cruising down a road at 55 mph, they can click the "Cruise" control button and take their hands completely off the wheel. The car will stay in its current lane and maintain its speed. If a deer pops out in front of the car, the car will auto-brake, and if a slower car is up ahead, the car will adjust its speed. It takes cruise control to a whole new automated level. On the interstate, if the driver turns on the left turn signal, the car will automatically switch lanes to the left when it deems the move safe, and then continue in that lane at that speed while the driver relaxes, sipping away at their coffee and chatting on the phone as if their car driving itself were no big deal.
         However, there is a caveat to the whole experience. While Tesla's engineers have spent years on this project, there are still risks, and since self-driving is still in beta, Tesla's CEO Elon Musk made a smart liability move with the statement, "We’re advising drivers to keep their hands on the wheel. The software — it’s very new," he said. But he added: "In the long term, people won’t need hands on the wheel." Backing this up, Musk announced that the car has self-learning capabilities programmed in, a smart artificial intelligence that, over the course of hundreds of millions of miles, will learn from driving habits and, like a real teenage driver, learn from its mistakes and correct them. Musk stated, "When one car learns something, the whole fleet learns something." The network uploads "data to the central server, where it can be collected, do system analysis, and then feed that back into the cars. That’s the next level — and far beyond where other car companies are. Any car company that doesn’t do this will not be able to have an autonomous driving system." Musk also said that updates will happen regularly and the "car should improve each week…you’ll probably notice a difference after a week or a few weeks." Sure enough, users flocking to this Reddit post have reported witnessing exactly that improvement. One man stated that his car used to take one turn near his house way too fast and would have to hit the brakes mid-turn to slow down. Over the next two weeks, however, he noticed that the car was changing, and eventually it slowed down well before the 90-degree turn, and now takes it like any safe human driver would. This Skynet of a system is working, and with over a million miles being logged a day, Tesla and its engineers have done a great job. One team cannot program a car to drive perfectly, but, like a human, if you program in the ability to learn from mistakes, well, practice makes perfect, and the car is practicing, and it's learning! Yeah, I never would have imagined saying that growing up without picturing full-on Transformers.
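The "whole fleet learns" idea boils down to collecting observations from many cars, aggregating them centrally, and pushing the result back out. Here is a toy Python illustration of that aggregate-and-redistribute loop; it is emphatically not Tesla's actual system, data format, or algorithm, just the shape of the idea, with made-up road segments and speeds.

```python
from collections import defaultdict

def aggregate_safe_speeds(reports):
    """reports: (road_segment, comfortable_speed_mph) pairs observed by individual cars.
    Returns a conservative (minimum observed) speed per segment to push to the fleet."""
    by_segment = defaultdict(list)
    for segment, speed in reports:
        by_segment[segment].append(speed)
    return {segment: min(speeds) for segment, speeds in by_segment.items()}

# Made-up observations uploaded by a few different cars:
fleet_reports = [
    ("maple_st_90_degree_turn", 27),   # one car took the corner too fast and braked mid-turn
    ("maple_st_90_degree_turn", 18),   # another car found 18 mph comfortable
    ("highway_141_long_curve", 55),
]

speed_map = aggregate_safe_speeds(fleet_reports)
print(speed_map)   # every car downloads the updated map, so the whole fleet "learns" the turn
```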

-522 words

Smartwatches: Useful or Gimmick?

          Smartwatches have been in the buzz lately. Apple released its own Apple Watch earlier this year, and other manufacturers have been rolling them off the shelves for quite a while, such as the circular second-generation Moto 360, or the square LG G Watch dating back to mid-2014. However, they remain controversial in both concept and usage. Smartwatches are either seen as overpriced, a gimmick, or a fad revival of the dying wristwatch, since phones that tell the time are everywhere. Nonetheless, they have started working their way into people's lives, and for the reason that they not only tell the time, but offer better map directions, sport tracking, and call/text/app notifications right on the wrist. While most activities will still alert me with a notification, requiring me to pull my phone out, many can be responded to with voice, or even better, I can just swipe them away without pulling my phone out. So while this is just a digital accessory to my phone, I believe its most valuable function is to pull me away from my phone. So I went out and bought myself a refurbished LG G Watch on the cheap and decided to give it a go, to see if this tech hype was worth dishing out money for. The following is my experience with Android Wear.
          The package finally came. If my milling around, checking my FedEx tracking page every 5 minutes during school, could be summed up by a song title, it would be "Centuries" by Fall Out Boy. When did the package arrive, you might ask? Oh, just the morning AFTER I left for an entire weekend in Chicago. So when I got home from vacation, I eagerly cracked the box, sliced the seals, and opened up my gem to find a beautiful metal and glass watch staring me in the face. The watch has no buttons at all, as it strives for elegance, and there is no need for buttons, as the unit is driven and woken by either a tilt of the wrist or a tap on its screen. I gave it a quick tap, and sure enough....nothing. Time to hit the charging station. Included was a nice, strong magnetic charging cradle, and when the watch gets close, BAM, it gets sucked in like a planet being pulled into a black hole. It took about a full hour to fully charge the watch from its dead state, but I played with it a little while it rejuvenated. Actually, playing I didn't do much of, as four system updates, one after another, were waiting for me out of the box, so I spent my time clicking "Update," "Yes," "Update," "Yes," "Update," "Yes," "Update," "Yes," and by the time the watch was fully charged, I had a watch with the latest firmware. So despite the age of the watch (though the specs ARE up to date!), rest assured, the software experience would be the same as on any of the latest smartwatches.
           After the charging cycle completed, I threw the watch on, and to my surprise, it felt surprisingly light and thin (it is thinner than the first-generation Moto 360 by 1.5mm), especially compared to my previous Nike+ GPS watch for running. The band feels premium, elastic, and comfortable for being rubber; LG killed it here. KILLED. IT. I'm not going to wear something that kills my wrist or weighs me down like dumbbells strapped to my arm. I was even going to dish out $11 for a solid black metal band, since the watch offers interchangeable bands (that DON'T cost $50-$100. Yes, I'm looking at you, Apple -_-). However, I found this band so comfortable that I won't go metal; I'd rather keep this good, flexible feel.
           As for learning the ins and outs of the watch, it didn't take long to master. A built-in tutorial teaches the basic functions: dismissing cards, or notifications; answering calls (though while one can answer a call on the watch, it's a one-way conversation until you pull your phone out of your pocket, as the watch has no speaker. Still, it's useful to swipe to answer and yell, "Hang on mum, I'm busy purging, gimme a sec!"); and using voice commands on the watch. Overall, a simple tutorial, but getting the most out of Android Wear's true functionality and rhythm would take practice.
           At first, I expected too much of Android Wear. I downloaded apps such as a web browser, and a full keyboard to try to secretly text in class without using voice commands. While possible, it's a complete waste of time. A 1.65" screen was never meant for this, and thus I COULD type a text or browse web pages, but WHY?! I eventually uninstalled these gimmicks; the programmers behind them are geniuses and programmed them well, but they have little use in real life. Continuing on, I learned that receiving text notifications is amazingly useful, as I usually forget to check my phone, but a vibration on my wrist is very hard to ignore; it also lets me read texts in class and swipe them away without becoming distracted, replying, and getting pulled into useless conversation. Notifications such as Snapchat's were a little less useful, however. While it's great to know I have a Snapchat, I was disappointed to find I could not view the snaps on my watch. I know the watch is a small-screened surface that could be easily photographed, but I still ask why; just being able to see my Snapchat conversations would be nice sometimes, without pulling my phone out. Other apps that benefit from Wear are AppyGeek, which sends me a daily technology news article to digest in class to spice my bland day up a bit, and Google Fit, which tracks my daily steps. I also have fun changing the watch face, now on a weekly basis, though at first it was daily. It's a nice feature, as it's like owning a new watch every week, and it adds a high level of customization, with thousands of faces in the Play Store.
         But after using this watch for two weeks, being excited, then disappointed, and then impressed, where do I stand now? Well, while most activities will still alert me with a notification, requiring me to pull my phone out, many can be responded to with voice, or even better, I can just swipe them away without pulling my phone out. So while this is just a digital accessory to my phone, I believe its most valuable function is to pull me away from my phone. No longer will I have to check my phone for phantom vibrations I wasn't sure about, and I can save time by swiping away useless texts in class without pulling my phone out. It's a time saver, and I believe this one earns a point for using technology smarter instead of becoming addicted to it, even though the idea behind it sounds like the opposite. Using a deliberately limited piece of technology to get basic tasks done pulls us away from the tempting world that is our smartphone on the internet.

-1235 words

Saturday, November 14, 2015

Offline Google Maps

          Google is usually an innovator, but it's taken them many, many years to reach this day: fully offline maps. While not revolutionary to some, I think this is great. Personally, I have a cheapo carrier, and out in the middle of nowhere I have little to no data service, so being able to take my maps offline means that if, for some reason, I lose data and go off route, I will be unaffected, since my device at least knows what road I'm on; previously, it only downloaded my destination region, my departure region, and the roads I planned on driving. Competitors such as HERE Maps have had this feature for years, so while I think it is long overdue, I'm glad Google finally jumped on the bandwagon, though it's Android-only for now (with iOS coming later in the year). Now, if only Apple Maps would add this functionality...(just kidding, they'll always be terrible!)

-158 words

Sunday, November 8, 2015

Cortana for iOS Beta: Just Stop Microsoft

          During the year I owned a Windows Phone, I watched my phone transform through software updates. Not only did the new software finally allow wallpapers, folders, and a swipe-style keyboard, but the most anticipated feature arrived on the 15th of April for willing beta testers: Cortana.
          Cortana was quite amazing. Her voice was so authentic, so real sounding, and unlike Siri, she actually felt like an assistant that got things done. One could simply say, "Hey Cortana!" and a dinging chime would sound, signalling she was listening for questions and commands. Marketed as the first truly personal digital assistant, her main edge in the voice-assistant marketplace was that she felt more personal, able to tell jokes and be productive at the same time with a smooth synthesized voice.
          Some of her best features included location reminders (say, "Remind me when I leave Walmart where I parked," or "Remind me next time I'm at school to return my library book."), telling cheesy jokes (no explanation needed), reading text messages aloud, sending texts, calling people, and most importantly and impressively, using APIs. APIs, or Application Programming Interfaces, let one app interact with another. Using APIs, Cortana is able to interact with apps outside of Microsoft's home ecosystem, so the user can tell Cortana, "Play episode 3 season 10 of South Park on Netflix," or "Play my Watch Later on YouTube," or "Play Radioactive on Spotify," and Cortana can open the individual app, search for the content within that app, and play it, all without the user having to click a button.
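In other words, Cortana acts as a dispatcher: she parses the sentence and hands the request to whichever app has registered a handler for it. Here is a toy Python sketch of that dispatch idea; the app names, commands, and handler functions are purely illustrative and have nothing to do with Microsoft's real extension APIs.

```python
# Toy command dispatcher illustrating the API idea; everything here is made up.
app_handlers = {}

def register(app_name):
    """Let an 'app' register a handler the assistant can call on its behalf."""
    def wrap(handler):
        app_handlers[app_name.lower()] = handler
        return handler
    return wrap

@register("Spotify")
def play_on_spotify(query):
    print(f"[Spotify] now playing: {query}")

@register("Netflix")
def play_on_netflix(query):
    print(f"[Netflix] now streaming: {query}")

def assistant(command):
    """Very naive parse of commands shaped like 'Play <something> on <app>'."""
    if command.lower().startswith("play ") and " on " in command:
        query, app = command[5:].rsplit(" on ", 1)
        handler = app_handlers.get(app.strip().lower())
        if handler:
            return handler(query.strip())
    print("Sorry, I can only search the web for that.")

assistant("Play Radioactive on Spotify")
assistant("Play episode 3 season 10 of South Park on Netflix")
```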
          This is where Cortana and Google Now have always beaten Siri at its own game. iOS and Apple have always struggled to let applications extend their reach outside their own virtual sandboxes, and thus Siri has never had much interaction with apps outside of Apple's built-in ones. For example, I can tell Siri to set a 5 am alarm, and it will create an alarm for 5 am using the BUILT-IN clock app. However, if I tell it to play the latest PewDiePie video on YouTube, it will simply tell me it's confused, or run a web search for "latest PewDiePie video on YouTube," creating frustration for the user. This is why Cortana earned her name as a personal assistant. Recently, Microsoft created a beta app of Cortana for Android, and interestingly enough, has now also released a beta app for iOS as of this week.
          The idea behind releasing the voice client on Android and iOS is to convince these users that Cortana is amazing by giving them a teasing taste, and thereby convince them to switch mobile platforms to Windows Mobile, now Windows 10 Mobile.
          While the idea sounds great on paper, I consider it a waste of resources, time, and money. It's not convincing anyone to switch. As bold as that statement may sound, I believe it to be true. On both Android and iOS, the app is crippled beyond belief compared to what I had on my old Windows Phone. As I explained earlier, Cortana is so useful on Windows Mobile because of her deep API integration with the OS, which lets her interact with apps such as Spotify or Netflix directly. Once Cortana reaches a third-party platform, however, she no longer has that control over APIs and other applications. Without such integration, Cortana can only place calls, tell jokes, and Bing things, for the most part. So while I applaud Microsoft's efforts at expansion and unifying platforms, I think they should save their time, as the app does nothing but misrepresent how useful Cortana really is.

-606 words

The Most Powerful Viruses

Viruses. They affect not only humans, but also computers, in the form of worms, trojans, and other malware. They arrive from visiting shady websites, downloading files, or through a compromised network. Millions of these have been written, but here is a list of the top three pieces of malware ever to hit the mainstream, counting down from the least powerful to the most powerful and effective.

3. MyDoom
As the name already implies, this virus brought doom to its victims when it unleashed itself in February of 2004. According to British security firm MessageLabs, 1 in 12 emails handled by the firm was infected with the MyDoom worm. It was the fastest-spreading worm seen at the time, with roughly 100,000 instances intercepted by the firm, and a quarter-million-dollar reward was offered for information leading to the arrest and conviction of its author. Ironically enough, MyDoom shut down MessageLabs' servers for several days through massive DDoS attacks, and it even slowed Google to a dead crawl for most of a day in late July 2004! The author of MyDoom was never identified; however, several security firms believe the worm originated with a programmer based in Russia.

2. Sasser
In the spring of 2004, a worm by the name of Sasser began its havoc. It spread by exploiting a bug in Windows XP and 2000 that let it jump to any other vulnerable computer it could reach over the network. Once a machine was infected, the worm made shutting the PC down cleanly impossible without pulling the plug, and it caused repeated crashes of LSASS.exe (hence the name Sasser) that made the machine nearly unusable. A German teenager named Sven Jaschan was eventually caught and found guilty; he had released the worm on his 18th birthday as a "gift to the world".

1. Storm Trojan
In the beginning of 2007, the Storm Trojan infected thousands of computers. Unsuspecting users would open emails with subject lines such as:
-230 dead as storm batters Europe
or
-FBI vs Facebook

After a user opened the attachment, the trojan implanted a service called wincom32 and began passing data to other infected computers. Every infected machine joined a botnet, a network of compromised computers, and each new machine added to the chaos, like a virus spreading from cell to cell in a human body, amplifying its reach and strength.
By September 2007, the botnet had grown from thousands to millions of computers; researcher Peter Gutmann estimated that somewhere between 1 and 10 million machines were under the trojan's control.


I guess the moral of the story is: even if you trust the sender, you shouldn't be so quick to open an attachment, as malware can send itself through the contact lists of the people it infects. Keeping your antivirus software updated, scheduling regular scans, and downloading attachments with a scrupulous, critical eye will keep most malware threats at bay.

-486 words

Technology: Brain rotter or amplifier?

          What is the point of technology? Why do we treat it like it's amazing, yet undervalue its existence on a daily basis? Is it a godsend, or a curse? Does it make us smarter and more analytical, or dumb as rocks, reliant on our tech and the internet like a life support system?
          These are the questions that have arisen over the past decades as technology has slowly engulfed our entire lives. It first started with the calculators that filled entire rooms. Costing millions upon millions of dollars, they were slower than potatoes at calculating answers. Sure, they came in handy when crunching huge numbers, but otherwise, you could beat one of those machines at multiplying 324*293 any day! It was at this point that some saw technology take two divergent paths: either a wasteland of circuitry, or a vast land of potential.
          Now, prior to this point in time, the television had come along to replace the radio, and it too was met with criticism: some deemed it a melting pot for the brain, offering little more than a way to waste one's life, while others saw the leisurely activities the television could bring. However, with TV and Netflix time for teenagers climbing to many hours per week, it seems that some of the grumpy grandpas of the past may have been right on that one….But it keeps coming back to whether a technology is making us smarter or burning our brains out like Rimbaud burning his out in Paris over a thousand poems. I guess the internet, smartphones, and smartwatches are the latest arrivals to the game, so we should give them their shots to duke it out in the ring and figure this one out. Let's start with the internet.
          The internet gives us access to a wealth of information. We can Google when someone died, what someone did, who to vote for, when the next solar eclipse is, and we can read blog articles on the web to learn about the positives and negatives of technology. Seems legit, so let's give one point to Gryffindor…..but wait. Hold on there, jockey, a little fast on the starter gun. While we can look up and find information, the internet also opens a world of billions and billions of hours of completely non-educational YouTube videos, social media sites such as Reddit that just layer comment upon comment on a photo making fun of Justin Bieber, an ever-abundance of nudity, online bullying, and enough selfies to wrap a canvas of them around the earth several hundred times. Despite all of this, I would say it has united the world in a whole new way, and having information at our fingertips, while making us less likely to memorize who the first president of the United States was, gives us more power than ever before.
          Smartphones tie into this realm of being forever connected. While they provide the fastest portal to information via the internet, they also provide the fastest portal to distractions: Snapchat selfies, Twitter surfing, and...well, never mind, no one uses Facebook but old people. Smartphones, while being "smart," have pushed social media over the edge with looping 7-second videos of kittens, and therefore I think smartphones, though a tool I use every day, have made us just a tad more….not smart, as if the "smart" in smartphone has taken the word from us. It's a fair fight, with evidence on both sides, but it ultimately takes the distraction award for me.
          Finally, the brand new smartwatches step into the ring. Smartwatches are different, because they are either seen as overpriced, a gimmick, or a fad revival of the dying wristwatch, since phones that tell the time are everywhere. Over the last few years, however, they have started working their way into people's lives, and for the reason that they not only tell the time, but offer better map directions, sport tracking, and call/text/app notifications right on my wrist. While most activities will still alert me with a notification, requiring me to pull my phone out, many can be responded to with voice, or even better, I can just swipe them away without pulling my phone out. So while this is just a digital accessory to my phone, I believe its most valuable function is to pull me away from my phone. No longer will I have to check my phone for phantom vibrations I wasn't sure about, and I can save time by swiping away useless texts in class without pulling my phone out. It's a time saver, and I believe this one earns a point for using technology smarter instead of becoming addicted to it, even though the idea behind it sounds like the opposite. Using a deliberately limited piece of technology to get basic tasks done pulls us away from the tempting world that is our smartphone on the internet.
          So I guess the conclusion is, it kind of depends. Technology can make us smarter and help us grow with information, but it can also pull us into depths of distraction and disconnection. It's up to us, its users, to steer the car in the direction we wish to take.





-894 words

Sunday, November 1, 2015

Hanging Out Virtual Style: Who's Down?

        What do you do when you are bored and want to hang out with friends? Spam everyone in your contacts with texts, or add a "bored hmu" Snapchat to your story? What if you could just flick a virtual switch on your smartphone to the on position, and your friends could instantly see you were free to hang out? Well, that's what Google has created with its new app that is currently in Beta, by the name of Who's Down.
         Who's Down simply does as the name implies: it asks your friends list who's down for an activity or hanging out.

 While I see this as a really neat way of quickly finding out who is free, I can also see it causing trouble: for example, you might not want to hang out with a particular person at a particular moment, but if you ask "Who's down for a movie tonight?" and that one person messages you, you have to awkwardly ignore them, even though they know you are free and looking for someone to hang out with. There are other potential issues too, such as our world becoming more and more dependent on technology to guide our social lives and define who we are. Still, I think this is a very cool idea; unfortunately, it is currently only available via an invite from Google or from a friend who already has the app, which is a very limited few. Assuming this app reaches the mainstream sometime in the near future, though, who's down to try it?

-257 words

Add Wireless Charging to ANY Phone! (even iPHONES!)

Does the idea of wirelessly charging your phone sound like both a cool feature and a convenience? It sure does to me!



          However, unless you own one of the few smartphones that include it, such as the new Lumias or the Galaxy S6, wireless charging is most likely NOT built into your phone :/ But turn that frown upside down with some great tech on Amazon! :D
         Wireless charging consists of two components: a wireless receiver and a wireless charging base. The charging base outputs power wirelessly, similar to a router broadcasting WiFi throughout a house, and the receiver picks up that power and charges the battery. While anyone can buy a wireless charging base, it currently won't work on many phones, because most phones lack the receiver. However, we can add one!

By purchasing THIS wireless receiver (there are others available for the iPhone's Lightning port, or for other micro-USB orientations (A or B)), plugging it into your phone's charging port, and either tucking it inside the back cover of your phone OR laying it against the rear of the phone under a thin case, wireless charging is now possible (pictured below is an iPhone 6 with a wireless receiver sandwiched between the phone and a case).

          By picking up any of the assortment of wireless charging stations ranging between $15 and $30, plus a $10 wireless receiver, you can, for as little as $25, charge your device with cool style and convenience, and impress all your friends!

-241 words

R.I.P Chrome OS

          There are many different laptops available on the market: some have touch screens, some offer pressure-sensitive styluses as options, some have detachable keyboards, some have 360-degree hinges, some have dedicated graphics cards, some are powerful and heavy, some are low-powered and thin, some run Windows, and some run Mac OS X. There is a wide variety, but there is a third category often untouched and unheard of, and that is the Chromebook.
          Chromebooks were a concept started by Google in 2011, four years ago. The idea behind them was that sometimes people just need low-powered computers that can do word processing and web browsing, don't cost a lot of money, and are also very portable with great battery life. Microsoft had seen this financial opportunity back in 2007 and helped OEMs release netbooks. Netbooks, being not hefty enough to be called NOTEbooks, were, as the name implies, designed to browse the NET, internet written long-hand. The concept was brilliant, but the execution was a disaster. Millions of netbooks were sold to people looking for a cheap basic computer, but the battery life was subpar, and even worse, the performance of these machines was as fast as a snail trying to cross a large meadow. They flopped hard, with plenty of dissatisfied customers (myself being one of them), and they disappeared within two years as a reminder that matching cheap hardware with a full-blown Windows OS was a negative experience for all.
          Fast forward four years, and Google decided to step back into the ring. From its research and observations, the hardware was fine for browsing the web and doing basic tasks; the problem was Windows taking up too many precious resources on these weakling machines. Therefore, to fix that problem, Google said, "To hell with Windows!" and created Chrome OS. Chrome OS was built specifically with low system requirements in mind (though it runs excellently on higher-powered hardware, as seen on the Chromebook Pixel II). Chrome OS, as stated in the name, is completely based on the Chrome internet browser, but with small tweaks here and there. There is a desktop, but nothing on it. There is a "Start Menu," so to speak, and applets can be found there. I call them applets because they are not usually full-blown apps as found on a Windows machine. Apps such as Google Docs, Sheets, and Slides are all applets that run inside of Chrome tabs, and applets such as Netflix just open Netflix.com in the browser. Applets do usually allow for some offline capabilities, however, such as Docs editing, but this introduces the two caveats of Chrome OS: the need to typically be online, and the fact that it does not feature many apps.
          Chromebooks are primarily web-based computers, so without the internet, a large part of their functionality is lost. They typically come with little storage space, so most documents are stored in the cloud with Google Drive, or the user has to find a way to store them on another computer. This is where owning a Windows PC for storage and power-hungry apps and games, and owning a Chromebook for on the go at school or whatnot, can be very useful. I would not be able to say I could live with only a Chromebook without owning a desktop for my gaming. However, my Chromebook is invaluable to me: Chrome OS allows me to get almost 11 hours of usage on a single charge, it turns on in 7 seconds, resumes from sleep instantly, can stay in sleep for weeks without usage and still be ready to go, and is just the ultimate slim fit inside of my packed school backpack. Did I mention it still only cost $199? Damn Google, you done made good for me.
          Grammatically incorrect sentences aside, Chrome OS may only survive for another two years, as Google is planning on ousting it in 2017. Why would Google do such a thing to this amazing OS?! Are they crazy?! Chromebooks currently dominate a quite large chunk of the low-end laptop market and are among Amazon's most-shipped laptops, but Google is going to choose to pull out now? Well, it's not as bad as it might seem on paper.
         Google has recognized the negative criticism about the number of apps on the Chromebook, as Chrome OS cannot run traditional Windows programs, so Google has decided to take the best of what it currently has in its two children, Chrome OS and Android, and merge them into a new product. The firm plans to create a new child that has the millions of apps from Android's loaded Google Play Store and the desktop ecosystem of Chrome OS, and mix the two of them together to create......CHRANDROID OS!
Naming jokes aside, no one knows what this will be named, or what the final product will look like, but some people are worried their current Chromebooks will be rendered update-less and left in the past like netbooks were, while others are enthusiastic that Google MAY choose to update current Chromebooks running Chrome OS to the new operating system. I fall in the middle of that line: I believe it would be awesome if I received the update, but at the same time, my Chromebook will be 5 years old at that point, and considering it was only $199 at the time of purchase, I am not concerned with it not receiving an update; it will have served me quite well by the end of 5 years' time. Whether current customers will get continued support or not, no one knows, but I think the future is bright for Google against its Apple and Microsoft competitors: who said there can't be three big fighters in the boxing ring?

-974 words

Sunday, October 25, 2015

Why Encryption is Most Valuable

           Recently, the federal government has been trying to put away 13 drug lords for good for their crimes leading very large drug operations. However, while they have many of them pinned with hard evidence, several of them are still on trial. One of these men is Jun Feng.
           As of iOS 8, all iPhones are securely encrypted, so that if the passcode cannot be figured out, NO ONE can access the data, as the passcode is the only way to decrypt the data on the device, and too many wrong guesses will-yup, you guessed it-lock the device permanently.
           However, Jun Feng's iPhone is running old software, iOS 7, so the device is not encrypted. What this means is that Apple, the manufacturer, has a second way, a "backdoor," to get inside his device. The feds have reportedly intercepted a signal before it reached the phone that attempted to use Apple's iCloud features to remotely wipe all the data, so the feds have reason to believe the device holds critical evidence that could not only take down Feng, but maybe even escalate the case and compromise more members of this drug operation. However, to get this evidence, they have pressured Apple to hack the device. While Apple has helped the feds in the past, this time they are standing firm that although it is a criminal case, user data should remain protected. This debate is a hard choice for Apple, as a lot of pressure is involved, both from the security-conscious public and from the government. Users want to be sure that the company they buy from WON'T hand over their data if pressured by the government, because then the world might have no privacy. That is why I believe encryption is most valuable. It takes pressure off both Apple and the users. Data cannot be compromised by any party, whether through court orders or hackers, and cases such as this one would not exist if everyone's devices were protected with encryption.
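           To make the idea concrete, here is a minimal, purely illustrative Python sketch of passcode-based encryption. This is NOT Apple's actual implementation; the function names, the 10-guess limit, and the salt/verifier scheme are my own assumptions, included only to show why the passcode ends up being the only practical way back into the data:

```python
import hashlib
import os

MAX_ATTEMPTS = 10  # hypothetical guess limit before the device locks for good


def derive_key(passcode: str, salt: bytes) -> bytes:
    """Stretch the passcode into an encryption key with a slow KDF,
    so brute-forcing guesses off-device is expensive."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)


def enroll(passcode: str) -> dict:
    """Set up the 'device': store only a salt and a hash of the derived key,
    never the passcode or the key itself."""
    salt = os.urandom(16)
    key = derive_key(passcode, salt)
    return {"salt": salt, "verifier": hashlib.sha256(key).digest(), "attempts": 0}


def try_unlock(device: dict, guess: str) -> bytes:
    """Return the data-encryption key only for the correct passcode;
    too many wrong guesses lock the device permanently."""
    if device["attempts"] >= MAX_ATTEMPTS:
        raise RuntimeError("Device permanently locked")
    key = derive_key(guess, device["salt"])
    if hashlib.sha256(key).digest() == device["verifier"]:
        device["attempts"] = 0
        return key  # correct passcode: the data can now be decrypted
    device["attempts"] += 1
    return b""      # wrong guess: no key, and the counter moves toward lockout
```

           The point of the sketch is simply that nothing but a salted, stretched form of the passcode is ever stored, so neither the manufacturer nor the feds can conjure up the key on a properly encrypted device, which is exactly why this case hinges on Feng's phone still running unencrypted iOS 7.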

-330 words

Paying for YouTube

          Previously, I mentioned the effects that using adblockers on the internet has. It costs website owners 6.6 BILLION dollars of earned income, but in exchange, Adblock can remove the ads from almost any site to clean up distractions and enhance the media consumer's focus on the content. However, Google has recently launched its own program to combat this loss of income, but also to bring in more money for itself and its content creators. That program is YouTube Red.
          Red not only brings ad-free viewing for $10 a month, but it also strives to create exclusive content, allows background playing of videos and offline viewing, and, most importantly for a potential subscriber such as myself, includes unlimited Google Play Music streaming. That music streaming subscription costs $10 a month on its own, so if I could also get offline/background-playing YouTube videos and see PewDiePie's exclusive videos, that would just be AwEsOmE! While $120 a year is a lot for me, I think Google will win over a minority of people, particularly existing Google Play Music subscribers, but if Google throws in one more compelling reason, I, and many others, may be more persuaded to go with this option. However, for now, I will stick to playing music videos with the screen on, using Adblock, and listening to Pandora and SoundCloud.

-219 words

Wednesday, October 21, 2015

Top 8 Predictions of Back to the Future II

In honor of Back to the Future II having visited us on this date 26 years ago, I have decided to compile a list of the top 8 technology items that have made their way into our lives and were predicted by this amazing film (if you haven't seen it, go watch both the first and second, then come back to digest this article!).

1. Personal Drones
Flying drones are seen throughout Back to the Future's 2015, and they're shown doing everything from walking a dog to capturing images for news organizations. The prediction that drones would be big in photography has come true, though people still walk their dogs the old-fashioned way. Drones—widely available for consumers to purchase for about $1,000 a pop—have given us new and creative ways to do everything from getting the best footage, to tracking people, to even making one-hour Amazon deliveries on the fly.

2. Tablets and Mobile Payment Technology
In the film, there are multiple moments where people are making payments. However, instead of using cash or check, they do it the cool way- with their fingerprints. Yes, Back to the Future predicted everyone carrying fingerprint scanners with them (well, to be fair, businesses would carry them). The fact that the movie predicted such technology would be so widespread, as it is in our current smartphone world, is quite amazing. Also, the film's prediction that wireless devices could be used for payment (in cabs, for example) was spot on, and in the last decade we've seen smartphone apps such as Apple Pay and Google Wallet come along, which make it easy to exchange cash electronically.

3. Hands-Free Gaming Consoles
In one scene, which takes place at the Cafe '80s, two young boys give Marty the strangest look when they spot him playing an arcade game. "You mean you have to use your hands?" one says. "That's like a baby's toy!" While plenty of video games still require the use of your hands, it's been five years since Microsoft launched the Xbox Kinect, which lets gamers control game actions using voice and gestures. Similar technology is also built into nifty devices such as the Leap Motion controller, which controls computers with simple hand movements.

4. Smart Clothing and Wearable Technology
Marty wears power-lacing sneakers and a size-adjusting, auto-drying jacket in the film—two inventions that haven't appeared yet, though Nike has announced that it will release the power-lacing shoes in 2016 and will auction off one pair in 2015 in its usual effort to raise money for Parkinson's research. What is available today, though, are multitudes of wearable technology products, including wristband fitness trackers like the Fitbit, "smart shirts" that measure breathing, heart rate, and sleep patterns, and infant-size onesies that double as baby monitors. And while they may be too early-stage for purchase, other inventions like stain-proof clothes and high heels that change color with the click of an app are under development.

5. Video Phones
Marty’s future self gets fired during a video phone call in Back to the Future II. That call is not only prophetic of video chat applications like Skype and Apple’s FaceTime but also of Facebook, in that personal details like date of birth, occupation, political leanings, and hobbies are shared electronically.
Of course, the movie gets a few big details wrong—like the widespread use of fax machines.

6. Waste-Fueled Cars
Although you can't yet buy a vehicle with a fusion engine, like the one in Doc Brown's DeLorean, the technology behind the scene in which Doc uses garbage to power his car (well, technically the flux capacitor) is possibly coming. In fact, Toyota is promoting its new hydrogen fuel cell car—the Mirai—with an ad campaign featuring Back to the Future actors Michael J. Fox and Christopher Lloyd.
Hydrogen-powered cars are lauded as environmentally friendly since they convert hydrogen and oxygen to electricity, with water vapor as a byproduct. That eco-friendliness is partly offset by the fact that fossil fuel or natural gas is typically consumed to create the pure hydrogen in the first place, though scientists are experimenting with solar- and wind-powered generation to combat this negative effect. Regardless, kudos to the director.

7. Hoverboards
Tech startup Arx Pax has successfully created a working, real-life $10,000 hoverboard called the Hendo, which pro skateboarder Tony Hawk has personally tested. Lexus has also created a hoverboard using a different technology involving superconductors. Unfortunately, both versions can glide only over conductive surfaces, so you probably won't be using one to escape bullies in your town square any time soon, at least not until all of the roads are made of metal. ¯\_(ツ)_/¯

8. Video Glasses
One correct prediction from the film's vision of 2015 is how personal technology would disrupt the American dinner table. In one scene, Marty and Jennifer's future kids ignore their families, instead watching TV and talking on the phone using futuristic glasses. Sounds very familiar to our current generation, if you ask me. Though smartphones are the real source of distraction today, high-tech video goggles keep getting more advanced and popular. Google has stopped selling Google Glass (at least for the time being), but pairs are available for purchase on eBay and can be used to watch streaming video, record images, and search the internet. Microsoft is also working on its HoloLens, which plans to bring augmented reality to the business world, merging the real and virtual worlds into one.

-910 words

Sunday, October 18, 2015

Amazon Can Throw Fire Too

          Amazon is a great online buying center, full of anything and everything from toilet paper to refrigerators. Amazon tries its best to provide many services in an all-in-one package, and does a great job with it. However, to keep providing a great shopping experience, Amazon has had to start some lawsuits.
          Amazon first noticed hundreds of thousands of fake reviews, either 1 star or 5 stars, springing up all over the place, and to solve the issue, it started by suing companies such as BuyAmazonReviews or Fiverr. Companies such as these take in money from businesses wanting good or bad reviews distributed, and they proceed to pay workers to write anonymous fake reviews to push products up or drag products down in Amazon's ratings.
          However, suing these companies has not worked, and thus Amazon has started a lawsuit against 1,114 people accused of writing hundreds of fake reviews each. While Amazon does not yet know the names of these anonymous users, Amazon hopes to lead by example in showing that people leaving fake reviews will be punished when found. Through this seemingly harsh enforcement, Amazon hopes to keep its Garden of Eden pure so customers can make easier purchasing decisions, and I agree with its decision. If I am buying a really expensive product, every 1-star review I see greatly sways my decision, so knowing that there are only legitimate reviews on Amazon will help me make the proper, unbiased decision, and I'm sure many more buyers will side with Amazon due to the straightforward experience this brings to shopping at their site.

-264 words