Monday, August 27, 2012

Accessing MySQL Remotely With MySQL Workbench

Command line be darned. Visual tools are there for a reason, and if you honestly find the GUI easier for visualizing complex queries then you should use it. It also saves time when you want to scan the database to see what's in it. But that's not really the point here. A few days ago I began developing some major systems for internal use at the workplace and it was finally time to let the SQLite databases go. Not so much because of data storage needs, but mostly because I needed SQL-side validation of the foreign key constraints and didn't want to do a lot of extra work to keep the data integrity intact. But I digress. The problem was that I needed to connect both the desktop application and MySQL Workbench to a MySQL database sitting on a server, and there's really nothing on the internet that addresses this problem directly. The best 'alternative' is to use a web service to send the data back and forth, but since I'm not going online for now, there's no real need for me to be transferring data like that.

In order to do this I got my own virtual server set up running Ubuntu Server (ooo, he's taking the easy way out. No I'm not. Wikipedia uses Ubuntu Server. I'm using the best tool for the job) and my way into it is through SSH. For the record, I have no idea how I got SSH into PowerShell; I suspect it happened at some point while I was installing libraries for Cygwin. Anyway, after SSHing into my server I checked around and discovered that the network admin had already installed LAMP and phpMyAdmin, so my MySQL instance was up and ready. This would end up causing more problems for me than I had anticipated. At this point I'm not willing to go back and reverse all the steps I took to find out exactly which ones did the trick, but I know which ones are absolutely necessary and can offer options if a step doesn't work properly.

So the first thing you want to go do is actually read the manuals on how to create and manage user privileges. I’m in a bit of a rush here so I’ll add the code later but the main steps are as follows.

First up, create a user apart from root who has all privileges. Later, when you learn the full privilege list, you can revoke what you don't really need, but for now I'm not entirely sure what I need and what I don't, so I granted all privileges. The code went something like this (no, I'm not being at all precise here):

CREATE USER newusername IDENTIFIED BY 'type your pass here with single quotes';

GRANT ALL PRIVILEGES ON *.* TO newusername IDENTIFIED BY 'your password';

FLUSH PRIVILEGES;

If you read the manual you'll find that this creates a user who can connect from basically any host. The reason I want this is that I'll need a user who can connect from any machine inside the company, since I'll be making a desktop application that needs to access the db.

Exit the MySQL prompt. The next thing you need to do is stop MySQL from accepting only local requests. For this, open up the my.cnf file found under /etc/mysql/ using sudo vim my.cnf. What you want to do here is comment out the lines bind-address = 127.0.0.1 and skip-networking. An easy way to do this in vim?

:%s/bind-address/#bind-address/g

:%s/skip-networking/#skip-networking/g

And that's it. I think we are ready. I did go to the extreme of opening up port 3306 using iptables. This is the only thing that is really server specific, and you'll want to refer to the manuals of your particular distro. I don't think this step is necessary, so skip it for now, but if the actual step of accessing the db through Workbench or the app doesn't work, you'll want to come back and do this (or the equivalent if you aren't using Ubuntu Server):

sudo iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

sudo iptables -A FORWARD -p tcp --dport 3306 -j ACCEPT

sudo iptables-save

Hopefully the next step works without needing the iptables rules above.

It's time to connect MySQL Workbench to the db. Here's where I made the biggest mistake. I assumed that since I connect to the server through SSH, I should use that method to connect to the db in Workbench. Turned out I was wrong. Or at least, not wrong, but after all of this a standard TCP connection worked fine. Give the server name as the IP address of the server you are connecting to, XXX.XXX.XXX.XXX, that kind of thing. The port should ideally be 3306. (By the way, if you don't think your MySQL instance is running on port 3306, unlikely as that may seem, just type mysqladmin version into the command line of your SSH session and check the results. There's a line that says port. That's your port. If your port is different then change everything to match it. Doh!)

After you put in the IP, enter the username and the password that you created and test your connection.

You’re welcome.

And that's how you connect a desktop application or MySQL Workbench to a MySQL database that's sitting on a server.
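
For the desktop application side of things, here's a minimal sketch of what the connection looks like in C#, assuming you're using the MySQL Connector/NET (MySql.Data) library; the server address, schema name and credentials below are placeholders you'd swap for your own.

using System;
using MySql.Data.MySqlClient; // MySQL Connector/NET (assumed; any ADO.NET MySQL provider works similarly)

class RemoteMySqlCheck
{
    static void Main()
    {
        // Placeholder values: your server's IP, the user created earlier and its password,
        // and whatever schema you want to test against.
        var connectionString =
            "Server=XXX.XXX.XXX.XXX;Port=3306;Database=mydb;Uid=newusername;Pwd=yourpassword;";

        using (var connection = new MySqlConnection(connectionString))
        {
            // This will throw if bind-address or iptables is still blocking remote connections.
            connection.Open();

            using (var command = new MySqlCommand("SELECT VERSION();", connection))
            {
                Console.WriteLine("Connected. Server version: " + command.ExecuteScalar());
            }
        }
    }
}

If this runs, the Workbench connection with the same host, port and credentials should work too.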

Sunday, August 26, 2012

Liveblogging tools: Begging for Pricing Disruption

I know I said I would post on the conversation I had with the compere of the Etisalat event, but there's something I need to get out of my head after an experience I had today. There was a time when I would go to tech events, live stream them, and update my blog through a live blog plugin. When I started out there was an EXCELLENT, albeit ad supported, tool for live blogging called CoveritLive. Unfortunately, they discovered that free wouldn't cut it and went paid, leaving a free tier with some strange restrictions on how many user actions can be performed on the live blog. That strikes me as strange because it might mean my live blog is not permanent. Once I go above the threshold for a particular event it gets shut down, and I have to pay to ensure that it stays visible to future visitors of my blog.

So then I decided to look for some free alternatives out there. The main ones I came across were the WordPress plugin for live blogging, a site called Blyve, and Wordfaire. There are many alternative sites, though I believe ScribbleLIVE and CoveritLive are the only two really worth considering.

What's wrong with the other ones? The WordPress plugin is not really a liveblogging tool, in the sense that stuff doesn't get pushed out to the viewers. It gets polled, which isn't the best solution if you are hosting it on your own server. The second option there is to host your own Meteor server, which handles the pushing to the viewers, but again, live blogging isn't just for techies, and therefore the solution shouldn't be tech intensive either.

Wordfaire is nice, but it's in beta, isn't all that feature rich, and the worst part is that the embedding features are pretty bad. Not only do you have to customize the embedded live blog yourself, but as the event goes by you won't find all the messages in it. It shows only a certain number of messages, after which, if you want to see the rest, you have to visit the Wordfaire site itself for the full list. I imagine this is for advertising purposes, but then that's why I don't like the idea of completely free either.

The best alternative I have found is Blyve. It isn't quite as comprehensive as CoveritLive, but it comes really, really close. In the free tier you get 500 uniques per month. For a blog that sees only about double that activity across the entire blog for the whole month, that seems like a pretty good deal. But the problem again is: what happens on the day my visitor numbers become substantial, but the number of live posts I do isn't enough to justify paying a not insignificant monthly amount for a live blogging service that still limits the number of actions/uniques per month it can serve?

The Per Instance Pricing Method

From everything I've said, I'm willing to bet that between those who pay monthly and those who use the free tier there is a set of customers who are willing to pay some amount but use the free tier simply because they can't justify a full monthly cost. What if one of the above live blogging companies (I'd vote for CoveritLive and Blyve) came up with a model where people could purchase an instance of the liveblog for a particular post and pay a base amount determined by the traffic they expect to receive? If they receive substantially more traffic than that, they get a warning to pay for the next tier for that instance. This should rarely happen, because unless your post covers a really special event with global interest that gets voted to the top of reddit and Hacker News, the traffic you'd get is fairly easy to estimate. So, step by step, here's how the payment would work:

  1. I need to host a liveblog for this month's Refresh Colombo. I visit the liveblog site and pay $4 for an instance of the liveblog which can host up to 500 unique visits for the duration, plus $2 for every additional 200 uniques I expect.
  2. The live blog is available and life goes on.

But of course, what happens once the event is over? If it's a one-time payment then the liveblog host bears a cost to keep it viewable in their system, right? Here's the cool part. Once the liveblog is complete, offer a snippet of HTML with all the content from the liveblog so it gets hosted on my site. That means all I have to do is copy that HTML and replace the iframe embed code on my site once the event is complete. This isn't too tech intensive to be a problem and would solve almost all the problems for both parties. What problem does it not solve? The hosting of the pictures. If I want to host my pictures on CoveritLive or Blyve then they should charge me on a monthly basis OR, better, move them across to Picasa or Flickr for me and give me new HTML code that links to those pictures automagically. Boom.

This serves two main purposes. One is that I would have pricing that fits my needs and, I'm sure, the needs of many people out there. And on a second, equally important note, I would have some form of ownership of my data. Maybe the service doesn't have to be Flickr or Picasa. Maybe they could offer to let me download the pictures so I can upload them to my own FTP if I'm at that level of tech savviness. And if they've named them right (for example, according to the time each picture was uploaded relative to the liveblog timeline) then I could simply do a find and replace to swap their URL for the base URL of my FTP.

This probably seems a little too complicated, but at its most basic level: I pay for an estimated number of users, I get a new bit of HTML code to embed, and I get to keep my photos for free in services I already use, or pay a small fee to let the live blogging company host them for me.

C&C is welcome.

Friday, August 24, 2012

Quick Post: Solution for YouTube Videos Not Loading While Paused

Play a game while you wait for your video to load sir

This is probably not something new for most people, but it's been bugging me for a while. I'm not on a fast net connection at home, and when I watch YouTube videos I usually pause them and leave them to load. Recently I've noticed some videos not loading while paused, which really sucks. It's not a big problem in the sense that I can work around it by letting the video play muted while I do something else, but it's a problem nevertheless. I don't know what's causing it, but I do seem to have found a decent solution.

After searching on Google I found two Google Groups posts which led me in the right direction. The first was confirmation that I wasn't alone, and the second had a solution from a Googler. The solution? Change the quality of the video. Now, you obviously don't want to do this while it is loading, so ideally you do it right at the start, which is what I did, and I can confirm it works. What I did was switch from 360p to 240p at the start, wait for the video to start playing, and immediately switch back to 360p. Maybe it's my imagination, but the loading seemed to be much smoother after that as well. Hope this helps.

A Talk With an Etisalat Rep and Some DC HSPA+ Perspective

I'm not entirely sure I should call Abdul a rep, so let's just say right off the bat that 'rep' is purely a term I have given him. And like I mentioned during the live stream, I'll relate most of the stuff I spoke to him about. It's not a lot, but it was insightful, although there's still an empty spot I need to fill by giving the Etisalat hotline a call. Shame on me for not doing my research. First things first,

A quick recap of DC HSPA+

At the time of writing this post I've had the chance to sit through two presentations by Etisalat on the same topic and to test the new connection in more than one scenario and in two locations, and I think that makes it fair to give a small commentary and summary on what this is all about. Essentially, by allowing two simultaneous connections to originate from the same source, the speed that one can achieve gets doubled. Both the practical and the theoretical speeds. But no one cares about the theoretical speeds, right? A caveat though: there are three requirements that need to be fulfilled to achieve the new speeds. First up is a dual carrier compatible device. Second is an ISP with the infrastructure to provide the speeds without choking the network. And finally there's the capability of the servers you are contacting (e.g. YouTube's) to serve you at the max speed the device is capable of.

The Rationale

When speaking with Abdul I was curious as to how they would be marketing this package. Let's face it: broadband is already good enough to stream videos without a problem. YouTube videos at 720p and above can give issues, but up to 480p is fine, and honestly, that's usually good enough for most fail + cat videos. Even for the Olympics, 360p was absolutely fine on a 21 inch screen. So why would most people need double the network speed at quite possibly more than double the price?

The first answer that came through was that this is being targeted as a family package kind of thing. This was in fact reinforced during the presentation at Refresh Colombo, when the presenter mentioned that families would be able to share this connection without experiencing a drop in the quality of their individual experiences. It also fits with the fact that in the slides and the promotions the Etisalat groups were carrying around DC HSPA+ compatible MiFi units.

But that left room for the question of corporate packages. Corporates don't seem to be among the main target groups for this kind of thing, from what I understood, since they rely more on fixed line connections. There is, in my opinion, another avenue for this tech in the corporate space: the small (like <10 people) businesses starting up these days. Connections like this would be ideal for ad hoc freelance partners to have fast internet without being burdened by fixed line issues. Of course, I think I'm stating the obvious here, but I just want to open it up for discussion.

Pricing & Concluding Thoughts

When technology isn't being geared towards the individual, you have to imagine that it isn't going to be cheap either. After all, it's for the group, and therefore the per-individual cost may stay the same. Based on that, and guessing that these packages are aimed at groups of 3 people or more, the price should be roughly 2.5x that of any comparable package, and the equipment about 3-6x more expensive. The question though is whether or not it's worth it. If the internet works as advertised, I'm inclined to say it is, based on how much data is included in each tier. Looking at what Etisalat has right now, the Rs. 1,500 package gives a user 12GB before requiring extra payment for each MB (20 cents per MB). SLT gives a user 25GB at 8Mbps for that amount. To get to 25GB you'd have to pay Etisalat an extra Rs. 2,662.40 (the additional 13GB is 13,312MB at 20 cents each) under the current packages. Add that to your SLT bill and you would be Rs. 700 away from the Web Pro package that gives 60GB at 8Mbps.

Speed does matter, but with these speeds and plans for family oriented packages, I think Etisalat would have to get rid of their existing packages and tailor some new ones, since all that added speed is going to mean people burning through their quotas really, really fast. 12GB is honestly nothing at all. My smartphone usage alone is usually 25% of that per month, so one can imagine what my standard internet usage is like. And as for SLT's quality of service, beyond the FUD you see on the internet I've actually been hearing good things about their newer packages, which means that in a battle over pricing I'd still not go with Etisalat. Of course one could say this is apples and oranges, but given that it's a family package oriented thing, I don't think the fact that I'm comparing fixed line vs mobile broadband really comes into play here.

The one other concern I do have, of course, is coverage. I had a chance to play with an Etisalat DC HSPA+ connection at Refresh Colombo yesterday and the maximum I could pull from it was 0.3 Mbps!! That's ****!! To be fair, we had some crazy rain, but the rain had pretty much died down by that time, so I don't see how that explains it. After all, my Dialog dongle was clocking 2Mbps before I knocked it out of the USB slot, thereby ruining the rest of my test.

So there you have it. A full evaluation of the Etisalat DC HSPA+ 'initiative'. In summary: the speeds are real when they do work, the applications for individuals apart from journalists are too minimal to justify the jump, and the charges that could surround this are also a little doubtful. BUT I will not make a final call till I call the hotline and hear what they have to say. So there will be an update to this post, but for now this is it.

Wednesday, August 22, 2012

Refresh Colombo August Meetup

It's been a while since I blogged a Refresh Colombo meetup, so this should be fun. I'll be giving one of the three presentations, which I'm really looking forward to. For the uninitiated, Refresh Colombo is a monthly meetup open to anyone interested in tech. And when they say interested in tech, it can be from any angle at all, not just the deep-in-the-code programmer level. Like the site says, bring anyone you want with you as a guest. Even your grandmother. Yes, the sweet lady who takes pictures with the iPad you gave her to stay in touch with you. Jokes aside, this month's Refresh Colombo is looking to be awesome and I am just as pumped up about the other two presentations as I am about my own. What's on the topic line?
I'll be presenting on Building Software Products Anywhere. Since joining Anythng.lk I have experienced a creative high like never before in my life, and I am building and rolling out more products over time and learning more about good software than I ever have before. This is strange because my original job description doesn't really call for anything related to programming. More than that, I'm planning on being responsible for a shift in how the company uses tech to complete its day to day work, one that will eventually transform this company into a tech startup to some degree. I want to share this experience with other software devs out there. Why? Because I believe that every software developer who wants to love what they do should be able to experience creative highs by taking charge of building products. And since not everyone can afford to be an entrepreneur, I want to share how you can still engage with building software products in the most unlikely of places.
The rest of the topics as per Refresh Colombo.
Visual and Creative Thinking – by Shiran Sanjeewa
Shiran Sanjeewa is the Creative Director at Elite-web-studio, a Manchester-based creative agency. He possesses extensive international expertise in branding, websites, mobile applications, UI/UX and online marketing. In 2012 he founded “Shiran Sanjeewa Associates”, a Sri Lankan startup branding and user experience consulting firm, now serving Silicon Valley clients with the user experience design of their software and hardware products.
I am really looking forward to this topic given how much I care about user experiences. And coming from a person with such an impressive background, this talk should really be a cracker.
Dual Carrier Cellular Networks: A Practical Outlook – by Damitha Wijewardhana
Damitha Wijewardhana holds an electronics and telecommunication engineering degree from the University of Moratuwa and an MBA from the Postgraduate Institute of Management. He is also a corporate member and a chartered engineer of the Institution of Engineers Sri Lanka and a member of the Institute of Electrical and Electronics Engineers, USA. He has 6+ years of industry experience in radio network planning, optimization and related new technologies, locally as well as internationally.
Sound familiar? I assume this talk will have a lot to do with the recently announced DC HSPA+ network by Etisalat. Based on the actual speed tests that were done yesterday and Shazly's impressions of using it in Dehiwala, I'm also assuming that this talk will give a bit of a rational counterpoint to the hype around dual carrier network speeds, and maybe a bit about what a network of this nature might involve in terms of costs. Which reminds me, I should give Etisalat's hotline a call to find out the details of their DC HSPA+ packages.
Look forward to a live blog from me, although I obviously won't be able to blog my own topic. In place of that I might livestream my own talk, and I will definitely blog about it in a follow up post too. Stay tuned for more information!

The Launch of South Asia's First Dual Carrier HSPA+ Network

Update: The full liveblog can be found here

Today is the official unveiling and opening of South Asia's first Dual Carrier HSPA+ network. I've tried to research a little bit more than what I already know on the subject. The essential point of the whole thing is that you get theoretical speeds which are 6 times as fast as standard HSPA+ connections, while in reality getting double the speed. Of course this is just my basic knowledge. The slightly technical bit about it, in as close to layman's terms as possible, is as follows:
3GPP Release 8 defines dual-carrier or dual-cell high-speed downlink packet access (DC-HSDPA) to allow the network to transmit HSDPA data to a mobile device from two cells simultaneously, doubling achievable downlink data rate to 42 Mbits/s. Dual-carrier operation is characterized as simultaneous reception of more than one HS-DSCH transport channel. Dual-cell operation may be activated and deactivated using HS-SCCH orders.
Apologies for the sketchy post but I needed to get a quick introduction done before I run off to the event right about now. Will update more. The more important thing right now is that you stay tuned for the live blog that should start updating in half an hour!



Tuesday, August 21, 2012

Problems With C# Datagrid Binding Combined With Combobox

Here's an unexpected issue I ran into today that I am currently at a loss on how to solve. When you bind a DataGrid and a ComboBox to the same source and edit the items in the DataGrid, you end up with the DataGrid's placeholder item finding its way into the ComboBox. I haven't found a solution yet, which is irritating because I don't need a problem like this finding its way into my system at this point especially. I HAVE TO DELIVER IT TOMORROW!!!!

I found the details of this problem on the WPF Toolkit discussion board, and the only useful information it provides is that this is the result of a bad design decision on the part of the WPF team at Microsoft.

From the discussion board:

Hi superlloyd,

Ok, so it sounds like you've got your DataGrid and ComboBox bound to the same collection, and since the ComboBox doesn't know what to do with the NewItemPlaceholder, it crashes.  NewItemPlaceholder is something which we add to the DataGrid.Items collection to represent the blank AddNewRow in the DataGrid.  However, NewItemPlaceholder should not be added to the DataGrid.ItemsSource (just the Items collection), so if you bind your ComboBox to DataGrid.ItemsSource, then this should solve the problem.

If for some reason that doesn't work, a less elegant solution would be to have two separate collections, one for DataGrid (which includes the NewItemPlaceholder) and one for ComboBox (which does not).  Whenever anything is updated or added in the DataGrid's collection, you can manually make those same changes in the ComboBox's collection, which should give the same appearance to the end user of the editing the ComboBox's collection through the DataGrid.

Thanks!
Samantha

This is just a bad decision. I'll probably dig around the DataGrid code later and see what I can do, but it's a pity given that it's such an essential control. I'll probably take the ugly and terrible approach of having a separate collection for the ComboBox because I need to get this done asap!
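
For what it's worth, here's a minimal sketch of the first suggestion from the thread, which is to point the ComboBox at the DataGrid's ItemsSource (the real collection) rather than at DataGrid.Items. The item class and the control names are made up for illustration, and the controls are assumed to be declared in XAML.

using System.Collections.ObjectModel;
using System.Windows;

// Hypothetical item type.
public class Category
{
    public string Name { get; set; }
}

public partial class MainWindow : Window
{
    private readonly ObservableCollection<Category> categories = new ObservableCollection<Category>();

    public MainWindow()
    {
        InitializeComponent();

        // The DataGrid adds NewItemPlaceholder to its Items view, not to the ItemsSource collection.
        myDataGrid.ItemsSource = categories;

        // Bind the ComboBox to the same underlying collection (equivalently, to myDataGrid.ItemsSource),
        // so the placeholder row never shows up in the dropdown.
        myComboBox.ItemsSource = categories;
    }
}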

Monday, August 20, 2012

Cloning Objects in C#

Cloning an object in C# is a less straightforward topic than one might think. Cloning is especially needed in CRUD applications where you want the ability to reset the data for a form without contacting the database again and without clearing the form. Before I talk about how I used it, I'll just share how I managed the cloning.

Memberwise vs Deep Cloning

The first discovery I made was that there are two forms of cloning available: shallow and deep. (Are those the real terms? I think I should rebrand this blog as a noob's take on programming.) Shallow cloning, aka a memberwise clone, is generally enough for most situations. The problem with directly saying

Object a = b (where 'b' is of the same type as 'a') is that, as far as basic OOP concepts go, you are simply copying the reference. There's no point calling this a clone, since any changes simply get reflected in 'a'. To get a b.Clone() call you can implement the ICloneable interface, but that's actually an unnecessary step and I'm still trying to find out why you'd bother with it.

TODO: Research why one should use the ICloneable interface

Back to the matter of memberwise/shallow vs deep cloning. Memberwise cloning basically takes all the individual members of your object and copies them into a new object. The only real problem here is that you have to make a relatively expensive cast on the result, since it comes back typed as object. Since you realistically won't be casting a billion objects in a single operation, this probably isn't too bad. The real problem with this method is when an object contains instances of other objects among its members.

What happens inside when cloning memberwise.

MyClass toBeClonedTo = (MyClass)objectToClone.MemberwiseClone(); results in:

MyClass toBeClonedTo = new MyClass();
toBeClonedTo.paramA = objectToClone.paramA;
toBeClonedTo.paramB = objectToClone.paramB;

But what happens when there is an object inside the object?

toBeClonedTo.MyClass2instance = objectToClone.MyClass2instance;

This ends up just copying the reference to that object, which means that if any changes are made to that particular instance they affect the cloned object as well, making whatever you did pretty much useless.

Deep Cloning

Deep cloning, on the other hand, is something that has to be implemented by the programmer and is about cloning every piece of information. Cloning. Not 'cloning'. Essentially all you have to do is return a new object of the MyClass type with the members instantiated the way you want them. If you are in control of all the classes, you might as well call the MemberwiseClone method in the other objects as well (eventually everything is made up of basic types) and put those into the constructor.

public MyClass DeepClone()
{
    return new MyClass(this.paramA, this.paramB, (MyClass2)this.MyClass2instance.ShallowClone());
}

Where MyClass2 will have a method called ShallowClone() that calls the this.MemberwiseClone() method. Do note that the memberwise cloning example I made above was not correct code and was just there to illustrate the concept.

And there you have it. Deep cloning for objects that have instances of other objects as members; shallow cloning (memberwise cloning) for objects containing only primitive types.
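
To put that all together, here's a minimal compilable version of the MyClass/MyClass2 sketch from above; the member types are made up purely for illustration.

public class MyClass2
{
    public int Value { get; set; } // hypothetical member, just so there's something to copy

    // MemberwiseClone is protected, so expose it through a public helper.
    public MyClass2 ShallowClone()
    {
        return (MyClass2)this.MemberwiseClone();
    }
}

public class MyClass
{
    public int paramA { get; set; }
    public string paramB { get; set; }
    public MyClass2 MyClass2instance { get; set; }

    public MyClass(int a, string b, MyClass2 instance)
    {
        paramA = a;
        paramB = b;
        MyClass2instance = instance;
    }

    // Shallow copy: paramA and paramB get copied, but MyClass2instance
    // still points at the same MyClass2 object as the original.
    public MyClass ShallowClone()
    {
        return (MyClass)this.MemberwiseClone();
    }

    // Deep copy: clone the nested object too, so edits to the copy
    // never show up in the original.
    public MyClass DeepClone()
    {
        var instanceCopy = MyClass2instance == null ? null : MyClass2instance.ShallowClone();
        return new MyClass(this.paramA, this.paramB, instanceCopy);
    }
}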

Thursday, August 16, 2012

Passing a string as a parameter in dbcommands

One of the final steps in building my first major WPF application was, of course, the database updates. Yes, this was the final part. I prefer getting the data retrieval and user experience design done at the start of a project itself; it makes a lot of internal programming issues so much easier. But that's a muse for later.

The more important thing is that I was seeing an odd issue with the data insertion in my system, where the text based columns weren't getting updated. The code I was using seemed fairly straightforward too.

sa.UpdateCommand.CommandText = "UPDATE Deal_Category SET CategoryName = ? , High_Cutoff = ? , Low_Cutoff = ? WHERE ROWID = ?";
sa.UpdateCommand.Parameters.Add(new SQLiteParameter(DbType.String,  category.categoryName));
sa.UpdateCommand.Parameters.Add(new SQLiteParameter(DbType.Double, category.high_cutOff));
sa.UpdateCommand.Parameters.Add(new SQLiteParameter(DbType.Double, category.low_cutOff));
sa.UpdateCommand.Parameters.Add(new SQLiteParameter(DbType.Int64, category.rowid));
sa.UpdateCommand.ExecuteNonQuery();

When the update completed I discovered that the column had become empty. How did that happen? It turns out that if you pass a string as that parameter, it calls the overloaded constructor that treats the string as a column name. Or at least that's what I think it was doing, since I didn't have much time to read the documentation properly. The important thing was that my value was being passed in as null. After looking at the constructor I wanted to be using, I noticed that it expected an object as input. Shot in the dark: I cast the categoryName to an object and it worked.
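
For reference, this is roughly what the working line looks like with the cast in place (the rest of the code stays the same as above):

// Casting to object picks the (DbType, object) constructor overload, so the string is
// treated as the parameter's value rather than as a source column name.
sa.UpdateCommand.Parameters.Add(new SQLiteParameter(DbType.String, (object)category.categoryName));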

Not entirely sure if this is the canonical way of doing it, but I thought I should share since it's a rather unexpected error to pop up.

Tuesday, August 7, 2012

Deploying Databases in Click Once Applications

I'm currently in the process of building an application that needs an SQLite database deployed with it for use inside the office. This was the first time I needed to deploy a predefined database, so I couldn't just run code to create the database on first run; I needed to deploy it with data. I could have deployed a .csv file and inserted it into an SQLite db created during the first run, but honestly, if I was going to deploy a CSV for that then why bother? I might as well deploy the SQLite file with it, right?

This was my first time deploying a ClickOnce application with an extra file, so I had to do some digging around to figure out the best practices for doing so. The first step was to include the SQLite db file in the build. The answer to that was found in the MSDN How-to section on specifying which files are published in an application. The relevant part can be found under marking files as data.

  1. With a project selected in Solution Explorer, on the Project menu, click Properties.
  2. Click the Publish tab.
  3. Click the Application Files button to open the Application Files dialog box.
  4. In the Application Files dialog box, select the file that you wish to mark as data.
  5. In the Publish Status field, select Data File from the drop-down list.

Next comes the code to access the database. Keep in mind that whatever you've written probably assumes a certain folder structure, which needs to be maintained.
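
As a sketch of how that lookup can work (assuming, for illustration, that the .db file sits in a Data folder and is marked as a data file as described above), the ClickOnce data directory is the place to look at runtime, with a fallback for when you're running straight out of Visual Studio:

using System;
using System.IO;
using System.Deployment.Application; // add a reference to System.Deployment

static class DatabaseLocator
{
    // Hypothetical folder and file names; adjust them to match your own project structure.
    public static string GetDatabasePath()
    {
        string baseDirectory = ApplicationDeployment.IsNetworkDeployed
            ? ApplicationDeployment.CurrentDeployment.DataDirectory // where ClickOnce puts files marked as data
            : AppDomain.CurrentDomain.BaseDirectory;                // running outside ClickOnce

        return Path.Combine(baseDirectory, "Data", "app.sqlite");
    }
}

The returned path then goes into the Data Source part of the SQLite connection string.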

Not a big deal, but it's a two step process that needs a bit of poking and digging around to find. Unless you happen to use Mage, in which case you should be fine.

Monday, August 6, 2012

My first subtitle from Universal Subtitles

Universal Subtitles, aka Amara, provides an amazing service that makes videos on the web accessible to people who are deaf and, in general, to people who don't have access to sound while watching videos. The service aims to provide a crowdsourced method of subtitling videos, which is, frankly speaking, a lot better than Google's current auto subtitle method. I imagine that someday Google will reach a level of near perfection, but the problem is that it will still be only Google's services. Other services won't be as accessible, and there will still be limitations in context that can only be covered by manually entered subtitles. Unless, of course, we reach singularity-style artificial intelligence, which is probably going to take a while given the current state of AI, which John Siracusa quite rightly describes as being less than that of a roach.

Enough rambling! Join the movement on Amara and do some good for the world. I personally have given up the world of memes on 9gag in order to devote time to subtitle a video or two at least each week. Whether or not I actually stick to this remains to be seen. But it certainly is a worthy goal to go for.

I'll write a review of the full service later but without further ado, here is my first video that I subtitled from YouTube.


Sunday, August 5, 2012

The poor state of Blogger's web interface

When writing my post on Slices for Twitter (for Android) I came to realise just how terrible the web interface for creating posts in Blogger is. It doesn't get any better in the Android version. It boggles my mind: Google is the same company that wants to push web-only usage on PCs through Chrome OS, yet they can't seem to create good content creation interfaces in their own products. To give an example of just how terrible it is, this is what my original post on Slices for Twitter looked like on my blog:

Like that's exactly what it looked like. BOXES!!! Even better? How did I embed this video? It's Blogger, which Google owns, embedding videos from YouTube, which Google also owns, so you'd think they'd talk to each other really smoothly, right?

Wrong!!

The picture above is me putting in the URL for Slices for Twitter, and that's the garbage I get as a result. Some of those results look really odd, by the way. Why on earth does Google not recognize a YouTube link and give me the video straight off???? Why isn't there at least an option to paste a link in??? Oh, and good heavens, let's make sure we do not under any circumstances add an option to paste embed code in...

That's one scenario. Then there's the day I pasted my XAML code in. Sadly, the Blogger interface was incapable of converting the special characters to their relevant HTML codes, which ended up breaking the whole post. Even better? When I went into the HTML and manually corrected it to say & l t ; (I added spaces because I don't know how it might get represented otherwise) the Compose view didn't show me the actual character. No. It showed me HTML code.

Blogger has come a long way. The dashboard, post settings and all of that are pretty. But that toolbar on top and the big text input area in the middle are still stuck in the days when I first used Blogger.

Dear Google. 

You know what to do.

Thanks. 

Saturday, August 4, 2012

Slices for Android

Very nice looking and all of that. I agree with the review that I'm not entirely sold on the idea of slices. It would be nice if I could convert my lists into slices. A lot of the consumption mechanism feels like a hybrid of Twitter lists and Google+ Circles. Slices even decided to implement their own discovery tab, with various topics containing various slices of people curated by them for people to 'slice'.

Is it just me, or does it sound like a really, really, really bad idea for them to come up with a client like this in the wake of Twitter giving the cold shoulder to developers who build apps that take eyeballs away from the streams controlled by Twitter? Take the discovery tab: it's a big part of Twitter's financial future, and right here we find these guys implementing their own, which will probably never show any promoted tweets or anything of the sort.

Kudos on the great design. But honestly, the developers are either really brave, really stupid or just plain trolls.

Slices for Android on the Google Play Store

Friday, August 3, 2012

Steve's Apple vs Tim's Apple

Time posted a question on their Google+ stream asking users to weigh in on whether Tim Cook's Apple is failing after the great man, the late Steve Jobs, passed it on to him. Steve Jobs, of course, will be remembered for advising his successors not to do what they thought he would have done in a similar situation. Having said that,

I won't compare Tim's Apple to Steve's Apple. But I will look at Tim and the legacy he was left. As much as people want to harp on quarters, one has to admit that Tim was left quite possibly one of the most difficult legacies to handle. The iPhone was entering its 4th generation and there was now very little in terms of visible or truly revolutionary advancement going on. The MacBook Pro was also at a hardware apex. Everyone was guessing (and correctly so, for once... which just goes to show) what the next features would be.

Tim was basically left a company at its apex, which meant that competitors had started catching up and Apple's advancements had pushed those competitors into taking revolutionary steps themselves. Revolutionary steps lead to lots of attention in the tech sphere, and this attention can have a ripple effect, stirring consumer curiosity. Most importantly, in the midst of revolutionary steps being taken by other companies, people are looking back at Apple, the company that started a lot of it, to see what it will do in return. People don't get it. Apple is at the stage where they are perfecting things, not revolutionizing. As such, people are asking for something that's unfair. You can't keep revolutionizing without alienating the fans who find that their 1 year old purchase no longer has any support.

Having said that, however, Apple is probably in the last year or so in which they can keep touching up what already exists. If they haven't started yet, it's time for them to look at what they can do to really kick up a storm and get people talking and saying that the Apple spark is back. Many people forget that while Steve Jobs was a brilliant man, his most brilliant revolutions came first, and from that point the company iterated to make things perfect. Given the design of the iPad, I still consider it an iteration of iOS devices. Revolutionary though it may have seemed, it was still essentially an iteration.

Essentially, it's too early to judge Tim Cook's Apple, even if there are numbers that show a slow but sure shift in the balance of power. If, within the next 18 months, Apple cannot introduce something to take the general consumer market by storm, then we can start the judgments.

Amazon's Awesome Customer Service

Last night my Kindle broke. The reason for that could be an entirely separate blog post on its own, given that it's a mystery. What happened is that I didn't use my Kindle for just under 3 weeks, and during that time it discharged completely. When I went to get my Kindle yesterday to charge it, so I could finish reading one of Terry Pratchett's books, I discovered that it showed the charge sign but there was an e-ink 'stain' on the bottom left, like the e-ink hadn't been able to refresh that part of the screen. I charged it completely, switched it on and was dismayed to find that the stain remained on the screen. This was really, really upsetting for me since I love my Kindle, and in this case it was the newer 6" model and I hadn't even used it for 6 months!

I poked around on Amazon to find out how to troubleshoot my Kindle and came to the obvious conclusion that my screen was broken. This was step 1 of my really awesome experience with Amazon: it didn't take me long to ascertain what had gone wrong with the device. Step 2 was that the return and repair process was very plainly linked and explained. The main numbers I needed to call were given, and this is where the best part of my experience began. Upon calling Amazon I had less than 30 seconds of wait time, after which I was received by a customer rep named Brian. Whether or not that was his real name is irrelevant to me, because from that point on the service I was given was unlike anything I had experienced before. I explained the issue to him, and his only real question about the problem was whether or not I had dropped it. I said no, that it had been in its case the whole time, and so he agreed that the screen was probably broken.

To cut a long story short, at this very moment a brand new Kindle is on its way to me; when Brian asked me for my delivery address I found it too good to believe that this was actually happening and was over the moon. What sent me from over the moon all the way to Mars was the email I got about an hour later, stating that a brand new Kindle would be shipped over, that any customs taxes I might incur could be faxed over to them so they could handle them, and finally that all shipping costs to send the malfunctioning unit back would also be refunded if I gave them the tracking code.

Throughout this whole experience was the feeling that this customer service rep knew exactly what he was doing and, more importantly, knew every relevant detail about me. Even when there was a minor hiccup, because I was calling the US number about a Kindle that had been received as a gift from the UK, he just asked for a little bit of time and sorted it out transparently, informing me that he'd fill in a non-standard form.

One call. One word to describe it.

Wow.

Wednesday, August 1, 2012

Google Calendars and Change logs

Since I never use calendars with anyone else, the idea of having version control or a change log for my Google Calendars never occurred to me. But speaking to a friend who had just discovered that a whole set of calendar events added by a co-worker had gone missing made it clear that this is a big problem. I've had this issue in the past with Google Docs, where on rare but annoying occasions data suddenly went missing after the most recent edit; the data put in with the last edit just disappeared, but showed up in the version history.

Aside from unpleasant vanishing data, there's the real concern of data conflicts when collaborating on a calendar. If appointments for a particular day are being put together and someone mistakenly wipes out the 2 PM board meeting with a potential investor, you are asking for a lot of trouble. The point is, whether these are edge cases or not, Google Calendar has the option for sharing and collaboration, and anything with collaboration SHOULD have version tracking built in. This is NOT optional.

I'm currently looking for threads where this feature has been requested, and I found a lot of them in the product forums for Google Calendar. I wonder why the Google team hasn't responded. Going social and all of that, you'd think this would be at least a little important.

Google and Microsoft? Goodbye Apple?

A slightly insane thought... but wouldn't it be crazy if Apple went out of fashion and Microsoft and Google became the in thing? Why do I even ask that? The Apple ecosystem is big and has a lot in it. But the more I look at it, the more I think, yeah... nothing exciting gets added anymore. It's just feature updates and catch-up features.

I like Microsoft. I always have. Their ecosystem is tying itself together slowly but surely, and very nicely, and has a lot of things growing in it. I could see my coding going social and being connected with GitHub and IRC within Visual Studio Express. I don't see Apple ever doing stuff like that. You have to be some kind of elite to be on their share menu.

Google has an amazing ecosystem that looks like it wants to add exciting stuff every day. The way they've redefined their UI (whether or not it felt more Metro-ish is irrelevant) shows they can be bold. The iOS interface, on the other hand, hasn't really changed in any way that could be described as bold.

It's just an exciting thought for me. By default I dislike keeping my thoughts with the pack, and I like to consider scenarios outside of the expected. And I like this scenario. It seems plausible and I want it to happen. There's room for only two, but when the third keeps nudging for room, the dominant two will fight to innovate. That's what Apple did to get on the bench, followed by Google continuing to shove Microsoft off the bench, followed by Microsoft making some pretty bold decisions to try and shove Apple off the bench in return. It's exciting, and my money is behind Microsoft and Google.