I ran across an interesting issue tonight doing a server upgrade. I built a new cc1.4xlarge instance at Amazon AWS to replace the older instance that powered our webserver roles. The new server ran Ubuntu 12.04, while the old one was on 10.04. After upgrading I noticed a few scripts failing the SSL handshake (PHP scripts using curl to connect to https endpoints). What was weird was that some SSL endpoints worked fine while others crashed with this error:
error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
I had no idea what to make of it, but some quick Google research turned up reports of problems with the new openssl libraries in Ubuntu; apparently a new cipher byte restriction in the handshake is causing weird failures against certain servers. You can read more about the problems others are facing in the Ubuntu bug reports.
Well, according to the bug reports, which are marked as fixed, our servers should be fine. I checked, and we are using the newest openssl package for Ubuntu: 1.0.1-4ubuntu5.3. This was a do-or-die situation. I HAD to figure this out or roll back the new server upgrade to the old OS.
The good news is I fixed it in our PHP scripts and here’s what I did:
curl_setopt($ch, CURLOPT_SSLVERSION, 1); // 1 == CURL_SSLVERSION_TLSv1
I noticed a few comments saying that if you forced SSL version 1 on the command line, everything would work. I took that advice and ported it to the PHP/curl version, and everything works fine for now. Hopefully the issues get resolved in the future and I can remove the hack.
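For reference, the command-line version people reported working is just curl's --tlsv1 flag. A minimal sketch of how I'd wrap it looks like this (the fetch_tlsv1 name and the endpoint URL are mine, not anything from our scripts):

```shell
# Force a TLSv1 handshake instead of letting curl negotiate via the
# compatibility hello that seems to trip up some servers.
fetch_tlsv1() {
    curl --tlsv1 -sS "$1"
}

# usage (hypothetical endpoint):
# fetch_tlsv1 "https://api.example.com/status"
```

Same idea as the PHP fix above, just at the shell level.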
For those that don’t know, Digg was recently purchased by a company called Betaworks. The team from News.me is looking to breathe new life into Digg and hopefully help it reclaim some of its former glory. Here are my thoughts on the direction they should go.
Digg lost its way when it started caring about what Facebook and Twitter were evolving into. Those sites are for sharing with friends; Digg was for great news stories.
If you show me stories my friends are interested in, guess what? I’m not interested in using your service. I’m the only one in my group of friends who focuses on stories in the tech space. For instance, I am deeply interested in blog posts about upcoming MySQL developments, but no one on my friend list at FB is.
Reddit makes this so easy, and there isn’t a FB or Twitter link to be found on that site. Rather, you are free to subscribe and unsubscribe from user-generated communities around specific themes. Look to riff on that model. Maybe algorithms that look at the stories you’ve dugg and automatically start recommending new communities to follow?
On Reddit, you really need to dig to find interesting subreddits to follow. This process could really be made better by having a sidebar of community recommendations based on something like the keyword density analysis of the 10 most recent stories you upvoted. To the new team, help me find niches that would be almost impossible to find by other means and I’ll love you forever.
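To make that concrete: the core of the keyword-density idea is nothing fancy. A back-of-the-napkin version of the tally (with made-up story titles standing in for a user's recent upvotes) could be as simple as:

```shell
# Toy keyword tally over recently upvoted story titles (titles are made up
# here); the top terms would seed community recommendations.
top_terms=$(printf '%s\n' \
    "Upcoming MySQL developments" \
    "MySQL replication tips" \
    "Benchmarking MySQL on SSDs" |
    tr 'A-Z' 'a-z' | tr ' ' '\n' | grep -v '^$' |
    sort | uniq -c | sort -rn)
echo "$top_terms" | head -3   # "mysql" comes out on top
```

A real system would strip stop words and weight by recency, but even this crude count surfaces the dominant interest.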
I wish the new team the best of luck in their endeavor.
Everyone that knows me pretty much knows my disdain for all things Internet Explorer. That program is an affront to web development everywhere and tries to stifle web innovation every chance it gets.
This class definition should give you an idea of how much I truly hate the blight known as Internet Explorer:
Ok, I have a suggestion about Digg v4. Digg is probably receiving a lot of these right now, so I’ll keep it brief.
I like the idea of “following” other people/sources/etc. However, like many lazy web visitors, I hate the act of browsing/finding/following. I’m thinking to myself, “Self, Digg already knows the stories I have dugg, commented on, and buried, so why can’t they just automatically populate my follow list?” You guys already know the “persona” of everyone in the Digg community. I would vote to have backend algorithms automatically build my follow list; I don’t have to see who is on it or even care. I just know that without ANY manual intervention from me, the “My News” page always delivers news I want.
Best of luck guys.
P.S. I have always wanted to change my Digg handle from cenarta to mikeytag, but we have never had the ability to. Any possibility of doing that in the future? I am guessing that Digg is using the actual username as a primary key rather than an id of some sort, creating this impossibility.
My wife and I took a quick trip to Santa Barbara and had a wonderful time staying at Fess Parker’s DoubleTree Resort (the same place we got married). One of the “benefits” of staying here, and at many other hotels around the country, is that you get a free copy of USA Today or some other newspaper.
The first morning we opened our door to leave and saw the paper sitting there on the ground in front of our door. I turned to my wife and asked “What is that?” Now, of course I know what a newspaper is and have even read them way back in the past. However, the fact that I have gotten to this point leads me to believe that there are other people out there that think the same way I do.
I generally see a traditional newspaper as something I would never want to seek news from. Why? Well I currently get an incredible assortment of news online and guess what? I get to choose exactly what type of news I want to read. Trust me, there isn’t a “one size fits all” paper in the world that would report on the things that interest me. I have a feeling that USA Today running Linux implementation stories and the latest gossip on various open source projects wouldn’t sell too well. However, the power of the internet lets me fully customize my information experience.
Also, why on earth would I want to limit the news I read to just one perspective? I usually find competing perspectives on the same news story through things like Google News and can help shape my own thoughts on the subject in a much more rational way.
Armed with my iPhone and laptop, I see no reason to seek out news in traditional media anymore. I will occasionally watch news coverage on TV for things like a “State of the Union” address, but find that when I watch a regular newscast I am completely uninterested in the many things they claim and spoon-feed as “news”.
Every time I hear Murdoch or some other media dinosaur whine about Google, I realize how screwed these old business guys really are. Instead of complaining, they should be putting their resources into figuring out new revenue models online. My guess is it will be some form of advertising, and many new startups are breaking new ground here while the old media companies simply grieve over their dying business model. I for one am finding that my own internet experience is actually being enhanced by new forms of advertising. Facebook is one of the great examples. Many times the ads catch my eye because they seem tailor-fit to my interests. The media companies have a wonderful amount of statistics and data that they could use to really push the envelope here, but instead they just throw it away.
I’m sure the radio execs weren’t too happy when television really took off as an advertising medium, but terrestrial radio is still around. It’s about learning to adapt to new media, not complaining about it. The internet is quickly becoming the great equalizer for all information. With its low barrier to entry and quick-to-market strengths, you would think that these companies would be all over it. Rather, we are beginning to find that it will soon be all over for them.
Well, we finally got approved for the MSN Adcenter API. Boy, has it been a long time coming. In addition, the Yahoo/MS deal just got approved, and we finally heard back from our MSN rep on what is to become of what are now two separate search-marketing backends.
We heard that Yahoo Search Marketing and its API will be phased out and all advertisers will use Adcenter. Timeframe? Sometime in early 2011 the change is supposed to happen. I am not sure what impact this will have on other advertisers, but it will make us work that much harder at making our Adcenter API integration rock solid, as it is going to have to support both engines in the near future.
Keep in mind that this “news” is through the grapevine and who knows how well communication of future business developments gets disseminated throughout the MSN machine.
Here is my go-to profile photo for several months now. Kara and I having fun at the dog beach!
Do the people enacting this law have any idea how the web operates? Or what kind of enforcement strategy they would even use? What if the server is outside Europe but the visitor is inside? Do they know that every major browser today allows you to do this already?
Here is a humorous comment by Information Society Commissioner Viviane Reding:
"In the E-Privacy Directive it is made very clear that a user can only give out his private data if there is prior consent so if there are spy cookies there must be a prior consent of the user, very clearly so. But there are also the so-called technical cookies, those which make the whole infrastructure of the internet function. Those are not concerned by this rule, just to clarify, because there were some critics that this amendment would make it impossible for the internet to function. It does not, it is a guarantee for the rights of the consumers."
Sorry Viviane, but the distinction between “technical cookies” and “spy cookies”, as you refer to them, is virtually impossible to draw. For instance, we employ multivariate testing on a variety of our websites to make sure that the best combination of a webpage is shown to certain types of users. The cookies involved are clearly used for tracking purposes, but without them our pages won’t even load. Not to mention that if we increase our “conversion”, we are actually creating a better web experience for all of the visitors to our page.
Ok, I finally made the plunge at the office and am moving all of our workstations over to a system where the /home directory is shared via GlusterFS (www.gluster.org), with NIS for central authentication. Everything works beautifully, but today I came upon a very strange issue related to Firefox 3. It turns out that Firefox is known to have issues with shared /home directories since it moved profile storage to SQLite. I found bugs reported with NFS, AFS, GlusterFS, etc.
Here is my down-and-dirty fix. Basically, a script on login sets up a /tmp/firefox-$USER folder and symlinks ~/.mozilla/firefox to it. On logout, another script syncs everything in /tmp back to a folder at ~/.mozilla/firefox-sync. These instructions are for Ubuntu Jaunty, which is what I am running.
Step 1: sudo nano /etc/gdm/PostLogin/Default
# move the .mozilla/firefox directory if there
if [ ! -L "$HOME/.mozilla/firefox" ]; then
    if [ -d "$HOME/.mozilla/firefox" ]; then
        mv "$HOME/.mozilla/firefox" "$HOME/.mozilla/firefox-sync"
    fi
fi
# make sure the temp profile folder exists and starts out clean
if [ ! -d "/tmp/firefox-$USER" ]; then
    mkdir "/tmp/firefox-$USER"
fi
rm -rdf "/tmp/firefox-$USER/"*
chown -R "$USER:$USER" "/tmp/firefox-$USER"
# copy the user's files over
if [ -d "$HOME/.mozilla/firefox-sync" ]; then
    cp -rpdf "$HOME/.mozilla/firefox-sync/"* "/tmp/firefox-$USER/"
fi
# create the link
if [ ! -L "$HOME/.mozilla/firefox" ]; then
    ln -s "/tmp/firefox-$USER" "$HOME/.mozilla/firefox"
fi
Step 2: sudo chmod 755 /etc/gdm/PostLogin/Default
Step 3: sudo nano /etc/gdm/PostSession/Default
# this moves all the data out of /tmp back to the firefox-sync folder
if [ -d "/tmp/firefox-$USER" ]; then
    rsync -a --delete "/tmp/firefox-$USER/" "$HOME/.mozilla/firefox-sync/"
fi
Step 4: sudo chmod 755 /etc/gdm/PostSession/Default
I hope this helps someone out there trying to remedy this behavior on their network.
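If you want to sanity-check the link logic without logging a user out and back in, you can dry-run it against scratch directories. Everything below uses throwaway paths from mktemp, so no real profile is touched:

```shell
# Dry-run of the PostLogin link step against scratch paths; TEST_HOME and
# TEST_TMP stand in for $HOME and /tmp/firefox-$USER.
TEST_HOME=$(mktemp -d)
TEST_TMP=$(mktemp -d)
mkdir -p "$TEST_HOME/.mozilla/firefox-sync"
if [ ! -L "$TEST_HOME/.mozilla/firefox" ]; then
    ln -s "$TEST_TMP" "$TEST_HOME/.mozilla/firefox"
fi
readlink "$TEST_HOME/.mozilla/firefox"   # should print the scratch tmp path
```

If readlink prints the temp directory, the symlink step behaves as expected and Firefox will read and write its profile out of /tmp on the real machines.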