New App - The Grim Tweeper, an easy way to clean up followers on Twitter

So the night I spent learning the Twitter API bore fruit! After getting some familiar help from Tess Rinearson (legendary still-in-high-school designer), Dan Shipper, Ajay, and I finished another app - The Grim Tweeper. It's a very simple concept that Ajay Mehta had dreamed up a couple of weeks ago; we got busy with other stuff and finally decided to code it up and ship it now. We haven't posted to Hacker News yet (will do when/if I wake up in the morning), but we believe this has some pretty cool viral potential. The only thing we think might be an issue is the Twitter API rate limit. Each user gets about 350 requests per hour, and we make about 3 calls per person you cycle through in this app, which works out to roughly 116 people before the app hits the limit. We believe most people won't get to that mark, but if they do, we handle the error and put up a nice message letting you know what happened and apologizing. Will keep you guys updated if this app takes off, which I (believe and) hope it will.
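For the curious, the guard is roughly this - a minimal sketch using Jaisen Mathai's twitter-async library (which I write about below), against Twitter's v1 rate-limit-status endpoint; the variable names and the apology copy here are illustrative, not our exact code:

<?php
// Before cycling to the next follower, check how many API calls the user has left.
// Sketch only: $twitterObj is assumed to be an authenticated EpiTwitter instance.
$status = $twitterObj->get('/account/rate_limit_status.json');
if ($status->remaining_hits < 3) {
    // Fewer calls left than the ~3 one follower costs: apologize instead of erroring out
    exit('Sorry! Twitter caps you at ~350 API calls/hour and you have hit the limit. Try again in an hour.');
}
?>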

Working with the Twitter API (until 7AM)

After working with the Facebook API extensively, the other night I decided it was finally time to learn how to connect via the Twitter API (little did I know it would keep me up until 7AM). A few words on my overall experience that night: it was pretty difficult to understand at first and there was a huge learning curve in the beginning, but as soon as I 'got it' it was pretty easy. The biggest reason I got it was Jaisen Mathai.

I started off going straight to Twitter's dev site and reading through their tutorial. Immediately after reading it I thought it might be a little over my head. They talked a lot about OAuth tokens and the flow between client/server/Twitter authentication, and I will admit I was a little lost and discouraged... but then I realized Google exists! So the next thing I did was Google 'how to php Twitter API' and open the first couple of results into tabs in Chrome. I skimmed them all and noticed they all used a little Twitter PHP class package put together by Jaisen Mathai. I thought, okay, seems pretty standard, let me download that. I immediately downloaded it from a website that was NOT Jaisen's because I thought the instructions were a little clearer (sort of a mistake, as I will explain), uploaded it onto a server, and started looking at the code. I read through the tutorial, looked through the code, and found it pretty straightforward. But as soon as I started playing with it and customizing it for a simple app I wanted to make, I kept running into issues. Here's the kind of call that was failing:
<?php
// twitter-async needs all three Epi files included (the key/secret come from your Twitter app)
include 'EpiCurl.php'; include 'EpiOAuth.php'; include 'EpiTwitter.php';

$twitterObj = new EpiTwitter($consumer_key, $consumer_secret);
$url = $twitterObj->getAuthorizationUrl(); // link the user here to sign in via Twitter
?>
That should have given me a $url with the proper OAuth token to send a user back to Twitter and log them in to my specific website. However, it kept returning the proper URL minus the OAuth token... I dug into the code, and the first thing I noticed was that some of the URLs in the EpiTwitter class were out of date. I changed the class variables to the correct ones, ran the code again, and hit the same issue. The next thing I did was ask @jmathai and hope he'd get back to me. He actually responded really quickly, saying to email him the question. So I did, and while I waited for a response I kept looking online to see if the answer was out there. All my search results basically linked back to his page, so I decided... you know, maybe it was wise to look at what he had to say about his own stuff. I didn't notice anything different until I stumbled upon his GitHub link for the project. I finally realized that, whoops, here was my mistake: his GitHub contained the most up-to-date code. I downloaded from there, re-uploaded the library, and boom, baby. It worked. I had a very basic version of logging in via Twitter up on the server, and now it was time to play. Of course, I also tweeted back at Jaisen to let him know it was just my mistake of grabbing old files, and he was kind about it.

The next thing I did was read through all the documentation for the new classes and learn how it worked. It took a couple of tries and it wasn't easy at first, but as soon as I learned the structure of the GET and POST requests, it became simply a matter of knowing what Twitter methods were available via the API and how to call them. By the time I was familiar with everything it was probably 4AM, so I spent the remainder of the night just coding a basic functioning version of the app I wanted to make (details will be released with the app later this weekend, hopefully tonight). It was a great experience, and now I know how to deal with the Twitter API! If anyone has any questions about this or needs help getting started, let me know. I feel like I've learned enough to be helpful at this point.
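To give a flavor of that structure, here's a minimal sketch of the callback side of the flow, as I understand the standard twitter-async usage (the tweet text is obviously just an example):

<?php
// Callback page: Twitter redirects back here with an oauth_token in the query string.
include 'EpiCurl.php';
include 'EpiOAuth.php';
include 'EpiTwitter.php';

$twitterObj = new EpiTwitter($consumer_key, $consumer_secret);
$twitterObj->setToken($_GET['oauth_token']);
$token = $twitterObj->getAccessToken();
$twitterObj->setToken($token->oauth_token, $token->oauth_token_secret);

// GET requests mirror Twitter's REST paths...
$creds = $twitterObj->get('/account/verify_credentials.json');
echo 'Logged in as @' . $creds->screen_name;

// ...and POST requests work the same way
$twitterObj->post('/statuses/update.json', array('status' => 'Finally figured out the Twitter API!'));
?>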

Blood, sweat, and tears - Mashable certainly tested our resolve

[Image caption: source: my heart and soul]

Boy, was it a rough couple of nights... but a great one which I will remember forever.

Feb 24, 12:15AM - On my way to start doing my homework. As soon as I sit down in the study room, Dan Shipper calls me: "We're on Mashable!" I'm stunned, don't know what to say, and he tells me to come back to the dorms so we can work. I run over as fast as I can.

12:30AM - I'm back in the quad, done dancing around in glee with Dan. We realize we need to get cracking. Our servers at GoDaddy are getting hit hard. We thought they would handle it, but clearly the virtual dedicated server was not enough. Dan called up Rackspace, got a Cloud Site squared away, and we began to migrate.

1:00AM - We decide it's worth it to take the page down rather than let it load poorly for people. So we put up a landing page and let the migration begin. We finally got a hold of Ajay (where the hell was he?). Ajay helps us with PR.

1:30AM-ish - Our Cloud Site was up and running! Dan and I were so happy! This was one of our emotional highs. Everything was working smoothly; we were getting hit with traffic at unimaginable loads, but we were handling it. We thought it was smooth sailing. So we figured we could spruce things up a bit, fix a bug here and there, calm down the fire of people who had seen the site down, and go to bed. We were wrong.

1:30AM - 4:30AM - Happily responding to tweets, Facebook comments, and Mashable comments while watching our traffic and user base grow. Through the night we saw user benchmarks hit 1K, then 2K, then 3K, 4K, and by the end of the night (er, early morning) 5K! Meanwhile, Dan and I are busy fixing bugs and making features more robust for users.

5:30AM - Ajay heads to bed. As soon as this happens, Dan and I realize something is wrong. The profile page is loading significantly slower. We figure it must be something to do with our database, because the code was basically unchanged through the night and traffic was consistently high, so those weren't the big difference. The only thing we could think of was more and more data being logged into our database. I mean, this was good, but it also hurt our speed - or so we thought.

6:30AM - We continue to look through our code to see if there are any MySQL calls that are looping or extremely inefficient. We catch a couple of small things but don't see anything that could really make that much of a difference. We see some calls taking as long as 30s from the webpage, when earlier that evening they took only 2-3s... what is going ON???

7:00AM - We decide to try logging the time it takes to go through each function that was called, to see if it really was our SQL database (see the timing sketch after this timeline).

8:00AM - Call Rackspace again; they finally discover it is not a CPU or RAM deficiency on our Cloud Site server, but long queues to our database, because our site is data-call heavy. We continue to look through our scripts, because they claim there is nothing we can do but wait and let the queues clear out.

9:00AM - We have to head to class, so we need to make a choice: keep WhereMyFriends.Be up, or take it down. We had to choose between leaving up a sub-par, slow product that would probably still pick up traffic, or taking it down and possibly closing off this window of opportunity for virality from Mashable. We eventually chose to take it down.
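The timing instrumentation, for what it's worth, was nothing fancy - just stopwatch calls around each suspect function. A sketch (getFriendLocations is a stand-in name, not our real function):

<?php
// Crude per-function timing to find out whether MySQL really was the bottleneck
$start = microtime(true);
$locations = getFriendLocations($userId); // stand-in for each suspect call
error_log(sprintf('getFriendLocations: %.2fs', microtime(true) - $start));
?>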
To me, the choice was simple, because you never release a product you know not to be the highest of quality. We didn't need artificial growth/hits; we wanted to make sure people knew we only produce great stuff. So we ended up using MailChimp to put up a nice apology letter, with a chance for people to sign up and see the product when it was up and running again. Then I had to start the day... In between classes and such, I was running around checking to make sure our landing page didn't crash (I figured it wouldn't, but... who knows sometimes) and seeing what I could do to make the SQL queries faster. I immediately got some great advice, thought about how to implement it, and started doing some of the coding while sitting around waiting for class to begin. I also made sure to help cool the fire online from all the hits going to our site with nowhere to go from there. Then at about 12PM Dan tells me we got on CNN! We were down at the time, but it was still exciting! Of course, now there were more angry/confused comments to respond to and quell.

After finishing some homework that was due the next morning, I met up with Dan again to start coding. I was introduced to a new method of parsing the friends. Before, we were sending off 20 friends at a time to get back 20 (or however many revealed their location) locations; I would figure out whether those friends were in our database or not, and then send them down different paths from there. In my new implementation, after getting some advice, I parsed the friend list immediately into what-is-in-our-SQL-tables and what-do-I-need-from-FB (see the sketch below). That made a big difference in terms of time, and along with other tweaks we thought our code was solid.

We were just waiting on Rackspace to migrate us from Cloud Sites over to the more powerful and scalable Cloud Servers. After we migrated over, things seemed to work fine. Mind you, it was around 4AM at this point. We had run into several issues along the way - DNS mis-pointing, private/public IP misdirecting, SQL fails, and every other migration issue you can think of. Surprisingly, though, the new untested code that parsed the friends data in a different way and sent them on unique tracks worked with almost no hiccups! I was pretty proud. At 5:30AM we figured everything was pretty much in the clear. We did more cleaning up, some more bug testing, and phew. Up and running. Sleep.

Wake up, things are good. Go to class, get out of class. Boom. Down again. I had a meeting to go to, and Dan said he was taking care of it, so I trusted him to get it fixed. Rackspace told us there really wasn't much they could do, so we put up the landing page again. FAIL. Jeeze, I was mad... So for the past several hours, Dan and I have been implementing memcache and seeing what else we can do to make things more robust and lightweight. I think we're pretty much done and just have a few small things left to tweak. We should be ready to release tonight. By the way, Dan is in NYC tonight at a concert, so if sh** hits the wall... I'm all alone. This project has probably taken up literally half of my life this week. But it's been worth it. Through all the good, and bad, and worse, and better, and best, etc. etc., I've learned a lot and know a lot about what to look out for and do next time I strike gold again (which hopefully is very soon).
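And the promised sketch of the new parsing approach. This is illustrative only - the friend_locations table and its columns are made-up names, not our actual schema. The idea is one IN(...) query against MySQL per batch instead of checking friends one at a time, and only hitting Facebook for the IDs we don't already have:

<?php
// Split a batch of friend IDs into "already in our SQL tables" vs. "still need from Facebook".
function partitionFriends(mysqli $db, array $friendIds) {
    $cached = array();

    // One IN(...) query per batch instead of one lookup per friend
    $idList = implode(',', array_map('intval', $friendIds));
    $result = $db->query("SELECT fb_id, location FROM friend_locations WHERE fb_id IN ($idList)");
    while ($row = $result->fetch_assoc()) {
        $cached[$row['fb_id']] = $row['location'];
    }

    // Anything not found locally still needs a Facebook API call
    $needed = array();
    foreach ($friendIds as $id) {
        if (!isset($cached[$id])) {
            $needed[] = $id;
        }
    }
    return array($cached, $needed);
}

// Usage: list($haveAlready, $fetchFromFb) = partitionFriends($db, $batchOf20);
?>

The memcache layer we're adding works the same way one level up: check the cache first, fall back to MySQL, and only then go out to Facebook.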