tag:labs.thredup.com,2013:/posts thredUP Labs 2017-02-15T20:09:37Z thredUP Engineering tag:labs.thredup.com,2013:Post/941595 2015-12-01T13:42:46Z 2017-02-15T20:09:37Z Retaining Engineering Talent

by Dan DeMeyere - @dandemeyere

Retaining engineering talent has become an increasingly challenging task for any engineering manager in the Bay Area. Engineers have a lot of leverage in the current market, so it’s easy for them to leave and find a new position elsewhere. Hiring and onboarding new engineers is time-consuming and expensive. A team that can mitigate turnover is a more productive and efficient team.

thredUP has been very fortunate regarding engineering retention. Our largest team, Web Engineering, has twelve engineers, and in the past 3.5 years only one person has left, to pursue a passion project after more than five years with us. While luck may play a role, much of the credit for our great retention goes to the team. The culture they have built, the thoughtfulness that goes into the hiring process, and an emphasis on peer mentoring have helped keep engineers at thredUP. Even so, retention is still something I have to constantly focus on as an engineering manager.

For me, the first step in retention is identifying the motivating factors for every engineer. Is the engineer someone who loves the company and our mission? Is the engineer motivated by what they do professionally and the projects they work on? Is the engineer primarily motivated by money? These motivators are not mutually exclusive and therefore every engineer should be managed in a way that is tailored to their specific mix of motivators.

Next, I try to develop a mutually beneficial relationship that aligns the company’s objectives with the engineer’s desired career path. At thredUP we call this relationship “professional development”, and it has been a big priority for us as a company. It’s the reason why I’m in the position I’m in today and it plays an important role in our ability to retain talent.

As my manager does with me, I meet with my engineers frequently to discuss professional development. Facilitating opportunities for personal and professional growth while delivering on the company’s objectives requires a thorough understanding of each engineer’s desired career path. This is why communication is key to retention: it ensures that the needs of both the engineer and the company are met.

Once I know what kind of opportunities will motivate engineers, we work together to build a roadmap. This step can be a challenging endeavor and it requires thoughtful planning, but thankfully there's a great book, The Alliance, that provides a framework for how to structure these professional development roadmaps.

The Alliance identifies different “tours of duty” an employee/employer can embark on based on the motivators of the employee. For example, an early employee or engineer that identifies personally with the company's mission will be well suited for a "foundational tour". This is a long-term tour focused on improving an employee's ability to provide maximum value for the company.

If an engineer is more motivated by following a desired career path, for example becoming an engineering manager or a technical lead, then they would be more suited to the “transformational tour”. In these cases, developing a roadmap is as simple as determining where the engineer is now, where they want to go, and what milestones lie in between. Once you attach timelines to each milestone, the relationship for this tour becomes a transparent series of check-ins discussing where they are at in the current milestone and how I can help them get to the next one.

Of all of the tours listed in The Alliance, the “rotational tour” is the one I use most often. Engineers looking to learn new technologies, solve difficult problems, and deepen their technical skill-sets tend to thrive when they’re able to rotate through big projects that expose them to the technical opportunities they’re looking for. The hardest part of this type of tour is aligning projects that serve both the company's objectives and engineer's professional desires. I work very closely with our product team (feature roadmap) and our CTO (technical roadmap) to discover and align these opportunities as early in the process as possible.

Sometimes opportunities for engineers fall into place effortlessly and other times you have to work hard to facilitate the opportunities engineers are looking for. At the end of the day, the more effort and energy you invest in professional development, the more the team will yield in terms of retention, motivation, and job satisfaction.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/941232 2015-04-21T16:00:00Z 2017-02-09T22:38:54Z Asynchronous Slack Scrum

by Dan DeMeyere - @dandemeyere

For the past four years, we've done scrum the same way: standing up at 9am sharp.

Our scrum is straightforward. What were you working on yesterday? What are you working on today? Are there any changes to plans or intersecting efforts that should be discussed?

When we were small, this worked great. As we grew, we needed to tighten up our scrum updates to keep scrum an efficient use of everyone's time. Then our web and mobile teams started participating in a combined scrum so we had to make our updates even more concise. Around this time we also made a decision to save any tangents or questions for after scrum to make sure we weren't wasting a large portion of the team's time while two engineers went into the weeds on something.

At fifteen engineers, we were still under 10 minutes, but recently one engineer keenly asked:

Everyone just gives a brief, high-level update of what they're working on, but we all know what projects everyone is on. Isn't the point to surface those issues so we can go into the weeds? How am I supposed to remember the questions I want to follow up on while I'm paying attention to everyone else's updates? At this point, what is the value add of scrum?

These questions made it obvious that our scrum was no longer serving its intended purpose. It was hard to believe that a process we had followed every day for so long was broken and we didn't know it.

So we ran a little experiment. We created a new Slack room for scrum. Every day, at or before 9am, everyone posted their scrum update. The new guidelines were to not worry about brevity or getting too deep in the weeds. Scrum updates are now opt-in in terms of who reads what, so specificity won't waste another engineer's time. Say whatever needs to be said to inform your teammates about your efforts on your project.
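As a concrete sketch of the format, an update can be posted to the scrum room through a Slack incoming webhook. The webhook URL and the Yesterday/Today/Blockers template below are illustrative, not our actual setup:

```ruby
require "json"
require "net/http"

# Build the JSON payload Slack incoming webhooks accept. The
# Yesterday/Today/Blockers template mirrors our three stand-up questions.
def scrum_payload(name, yesterday:, today:, blockers: "none")
  {
    username: name,
    text: "*Yesterday:* #{yesterday}\n*Today:* #{today}\n*Blockers:* #{blockers}"
  }.to_json
end

# Post the update to the #scrum channel (webhook URL is a placeholder).
def post_scrum_update(payload, webhook_url: "https://hooks.slack.com/services/T000/B000/XXXX")
  Net::HTTP.post(URI(webhook_url), payload, "Content-Type" => "application/json")
end
```

Because the update is just structured text, nothing stops an engineer from posting it at 7am and reading the rest of the team's updates whenever they come up for air.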

We ran the experiment for a week and voted anonymously during our next sprint retrospective whether we wanted to continue doing scrum in Slack. The vote was unanimously in favor of Slack scrum. The purpose of scrum was restored and the team was in full support of the process change.

In addition to fixing scrum, we also discovered that our new Slack-based scrum yielded new value for the team.

Evergreen

When someone misses stand-up scrum or someone is on vacation, there's no way for them to know what everyone said during scrum. With Slack and its evergreen nature, you can now go on vacation for a month and be able to catch up on what the team has been up to in a matter of minutes.

Having this visible history of scrum also results in increased accountability. When you get ready to post your scrum update, yesterday's update is still in view, which acts as a helpful reminder of what you said you would accomplish as you prioritize your efforts for the upcoming day.

Working Asynchronously

Stand-up scrum is synchronous in nature — only one scrum update or conversation is occurring at any one point in time. While some companies like GitHub intentionally gear their processes to be asynchronous, we sort of fell into it serendipitously with our new scrum format.

Having an asynchronous scrum has multiple advantages. For one, we no longer have a forced interruption every morning. If someone wants to come in early and get a head start, they can post their scrum update and check back to see the rest of the team's update once they come up for a breather.

If someone posts a scrum update that you want to follow up on, you don't have to worry about de-railing stand-up scrum with a question. You can spin up a separate Slack thread and dive into the details without stopping scrum or wasting anyone else's time.

To play devil's advocate, there are some downsides to asynchronous Slack scrum. For one, if you're not on a project with someone, it's possible to go days without face-to-face interaction with them. Over time I could see this being a morale issue. Stand-up scrum can also be a nice kickstart energy-wise to the day so this is something we'll be keeping a close eye on as well.

Revisiting our agile processes is something we do regularly through sprint retrospectives. Typically the catalyst for process change is an inefficiency surfaced by the team, as was the case for scrum, but we'll be taking a new angle on this in the future: how can Slack and other technologies that have become ingrained in the way we work help evolve our processes to be more efficient and yield more value?

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/830636 2015-03-26T03:06:53Z 2015-03-26T03:08:28Z A CTO's Notes & Musings From F8

By Chris Homer - @chrishomer


I was at F8 today and thought I might as well flesh out my notes and post them here instead of leaving them buried in Evernote.

Overall, a solid day at a conference. My general take is that it's more of a "Product Manager's" conference than a developer's conference. Still very interesting, but no hackathon, no labs, and little code-level talk in the sessions. Of course, at an early-stage company the lines blur, and even at thredUP there are aspects of my job that fit closer with Product than engineering. That said, the announcements and topics were thought-provoking, and what follows are my notes built out with some additional "musings".

Keynote

Zuck kicked things off with a review of the last year and some announcement "hints" before handing it off to others to talk Messenger & Parse.  The most impactful part of his talk was his discussion about bugs.  Bugs are something that every startup has to deal with.  Moving fast enough to "figure it out" almost always means you leave a trail of debris behind you.  Sometimes you can clean it up, sometimes you can't.  Until the last few years, Facebook has allowed many bugs to fester.  Last year they committed to reaching a 48 hour resolution on all "major bugs".  They hit that.  This year they are committing to resolving 90% of ALL BUGS within 30 days.  This is great to hear and quite inspiring.  One of the crucial aspects of this commitment is their willingness to be completely open about their progress against this goal.  Their system reports the status publicly and I love it.  Wondering how we can work towards the same at thredUP.  Not necessarily publicly, but fully visible to everyone at the company and something we are much more conscious of and held to. 
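For our own version of that visibility, the metric itself is easy to compute and report. A minimal sketch (the `Bug` struct and dates are made up for illustration):

```ruby
require "date"

# A resolved bug, tracked by when it was opened and closed.
Bug = Struct.new(:opened_on, :closed_on)

# Share of resolved bugs closed within the SLA window, as a percentage.
# Facebook's stated goal above would be pct_resolved_within(bugs) >= 90.
def pct_resolved_within(bugs, days: 30)
  return 0.0 if bugs.empty?
  on_time = bugs.count { |b| (b.closed_on - b.opened_on) <= days }
  (100.0 * on_time / bugs.size).round(1)
end
```

Posting a number like this to a dashboard the whole company can see is the part that creates the accountability, not the arithmetic.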

The Celebrity Sighting

While checking out the Oculus demo area, I turned to my left and there was a "posse" plowing past me towards the "teleport" demo. At the head of the pack was the 5'9" curly-haired mogul (Zuck) with about 25 people in tow. I was struck by how it reminded me of a similar scene when I was at Microsoft and Bill Gates showed up at Six Flags during a company party. Pretty entertaining to watch, actually, but it was also instructive: his poise and confidence were palpable. He was CLEARLY in charge and had everyone hanging on his every word and move.

Session 1: Parse

I have been spending a lot of time thinking about our systems and our architecture at thredUP. The path we are on is generally towards a micro-services infrastructure, but it's a long-term evolution. Something I like to ask myself is, "what are the alternatives we haven't considered?" Parse seems to provide a new alternative. While I could never see our entire company and app on Parse, I could certainly see how 90+% of the shopping experience could be driven through a backend built on Parse. Today it's a Ruby on Rails app/API slowly moving towards a more thoughtful API structure. However, were I to embark on the project today, I would definitely consider Parse for a good chunk. Turning over server infrastructure and much of the boilerplate backend is appealing because it focuses your attention on the value-add of your application rather than reinventing what Facebook/Parse has solved.

The announcements from Parse included better Internet of Things support, some developer tools updates, and integration with React. The React integration was the most intriguing to me. At thredUP, we are investing serious time and energy in planning a rebuild of our Ecommerce Customer Experience. For the web application, we have chosen to go with NodeJS & React. What Parse announced today is a plugin for React so that instead of building out the full Flux pattern, you delegate the dispatcher and stores to the plugin and Parse. At first it seemed to violate everything that I love about Flux, but then I began to see the simplification it introduced, and I think it's worth a look. If nothing else, it probably provides a good example of a way to abstract away the boilerplate of Flux. Reflux, Fluxible, and others do this too, but none of them are simple; in fact, they're quite complex.

Session 2: Facebook Login & Graph Updates

This session was interesting but only a few things of note from my perspective:

Direct Support - Facebook is introducing a 1:1 chat support service with developers on staff.  They have opened this up to anyone at F8 with 6 months free.  Going to try this out.

Login Manager - With the updates to the SDK comes a new component that seems to abstract away some of the complexity of dealing with sessions and authentication in iOS and Android apps. Pretty interesting and worthy of time.

user_posts permission - Looks like you will be able to request permission to dig into a user's news feed and related graph nodes/edges. Pretty interesting and powerful. I could definitely see value in crunching it for things like sentiment analysis, proactive CS engagement, and interests & recommendations.

Providing a Great Login Experience - I thought this was the most valuable topic. They talked about the practice of "requiring" permissions and how it sours the user experience. Some may be truly required, but usually they are not. They also talked about how low the "decline" rates are with permissions and how it's not worth hard requires that block the user from proceeding. The framework they discussed for designing a login flow was "Value, Respect, Reliable". Value refers to communicating the value of sharing your information; they used Shazam as the example. Respect refers to being respectful of your users' right to privacy and allowing them an experience while they "get to know" your app/company/service. Reliability refers to the testing and thorough work put into your flow: if it's not bug-free, intuitive, and fast, it's a big problem. None of this is rocket science, but it was a good reminder that this is the first impression you make on new users, so make it count.

Session 3: Facebook Messenger

The session itself was a little disappointing. I was expecting more out of it, and it was just kind of "ok". There were some promising nuggets and food for thought in the broader announcements around Messenger as a platform and Messenger for business.

Messenger as a platform - the big thing here is sharing through Messenger and allowing a conversation to take place from your app, through Messenger, and then back to your app. I am not sure what this means for thredUP yet, but some thoughts come to mind around "shopping with your friends", "building outfits for your friends", "gift lists", etc.

Messenger for business - this is the bigger opportunity, I think. It's early days, but the gist of it is that a user could ask that we send a receipt to Messenger for them. This would deliver their receipt to their phone or to their inbox on Facebook. The user could then reply right there to ask questions or get help. In addition, we could provide real-time fulfillment and shipping updates along the way and possibly ask for feedback post-purchase. They are currently focused on the after-sale experience, but I would love to explore integrating with them in pre-sales for live onsite chat and maybe even co-browsing assistance.

Oculus

No real business application here yet - unless someone wants to hack on a 3D immersive shopping experience with Unity :)

But the new demo, codenamed "Crescent Bay", was AMAZING. I would love to set that up in the office in one of our conference rooms. It was so completely immersive that it almost felt real. There were times when I wanted to reach out and touch what I saw, and in one scene with bullets, explosions, and shrapnel flying in slow motion, I felt like I was in the Matrix and needed to start dodging bullets. Can't wait for its release.

 

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/746345 2014-09-25T17:52:47Z 2017-02-09T22:38:56Z Responsys API Ruby Gem

by Dan DeMeyere, Morgan Griggs, and Florian Lorrain

In the past year, thredUP's email marketing team has elevated their game to the next level. While they increased the frequency of their campaign cadence, they actually sent fewer emails per customer as a result of careful audience segmentation and campaign personalization. To segment and personalize in a way that would make our emails resonate more with our customers, the team had to analyze customer data in Looker (our business intelligence tool) and then upload lists for individual campaigns, a laborious and tedious process. This is something we hoped our email service provider (ESP) would be capable of, but after more than a year of workarounds like this, our team concluded that we needed an ESP that empowered us instead of getting in our way.

Our CTO led discussions with a number of ESPs and ultimately decided on Oracle's Responsys solution. After the contract was signed, the integration project came down the pike to our web engineering team.

Our first step was getting to know our current ESP integration inside and out. Anyone who has completed an ESP migration before (this wasn't our first ESP migration rodeo at thredUP) knows that you can't just flip one ESP switch off and another ESP switch on. To warm up the new ESP's IP addresses, you have to integrate the new ESP into your existing system and ensure data is reconciled across all platforms. For example, if someone unsubscribes via your existing ESP, you need to update your web application's records as well as your new ESP's records so that one system doesn't email someone who already unsubscribed in another.
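The reconciliation rule itself is simple: the set of unsubscribed addresses in every system must converge on the union of unsubscribes from all of them. A sketch with an in-memory stand-in for the real clients (the class and method names are hypothetical, not our actual integration code):

```ruby
require "set"

# Hypothetical stand-in for our app's records or an ESP API wrapper.
class UnsubscribeList
  attr_reader :unsubscribed

  def initialize(emails = [])
    @unsubscribed = Set.new(emails)
  end

  def unsubscribe!(email)
    @unsubscribed << email
  end
end

# Anyone who opted out in any system must be opted out in every system.
def reconcile_unsubscribes(*systems)
  all_opted_out = systems.map(&:unsubscribed).reduce(Set.new, :|)
  systems.each do |system|
    (all_opted_out - system.unsubscribed).each { |email| system.unsubscribe!(email) }
  end
  all_opted_out
end
```

In practice this runs continuously during the warm-up period, so a late unsubscribe on either ESP propagates before the next campaign goes out.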

Once we had a feel for the existing endpoints, flows, and system dependencies, we had the necessary context for the next phase of the project, which was the research phase. Before doing a deep dive, we did a high level walkthrough to understand how we would be interacting with Responsys (ex: JSON? XML? SOAP?), how Responsys differed from our existing ESP and what terminology we should formalize to avoid confusion while discussing the Responsys API.

Responsys has a robust set of functionalities and, along with it, a robust set of documentation. Since our team for the project was relatively small (3 engineers), we decided to divide and conquer. We divvied up the major components (profile extensions, list management, etc.) such that the team in aggregate had a thorough understanding of every major component. When someone felt they fully understood one of their components, they would give a mini presentation to the other two engineers. After 3 days (1 day for the high-level pass, 2 days for the deep dive), we were ready to start breaking down the project so we could get to work.

At this point, we did what most Rails engineers would do: we searched for an existing Responsys gem.

Every link was a dead end: gems that didn't work anymore, gems that were abandoned. We went multiple pages deep sifting through results and tried every gem or library we could get our hands on. Nothing worked. With no viable gem options, we decided it was time for an architecture planning session.

Do we build our own Responsys gem? Do we create a self-contained layer in our app that talks directly to the Responsys API? Because our current ESP contract was expiring in less than a month, our project had a deadline. If we didn't complete the integration by the deadline, we would have to re-up our contract with our existing ESP and unfortunately they don't do month-to-month.

We couldn't afford to go down one road for a week or two and turn around once we discovered it was the wrong choice. We knew building the gem was the right way, but we didn't know if we could pull it off in time. We decided to have one person prototype the gem and one person prototype the non-abstracted internal approach. We time boxed the prototyping to one day and scheduled time to huddle the next morning.

The next morning we reviewed both prototypes, and while the internal approach was further along, the gem was at a point where it was a viable option. We knew the gem would be more work, and that we would have to be more thoughtful with decisions to ensure the gem was abstracted to the point where anyone could use it for their company, but we decided to bite the bullet and go for it.

We spent the next day or two deciding how to organize the gem code, what naming conventions we should use (we decided to use the same terminology as the Responsys documentation), and what gem functionality would be required to complete the migration project. Until the project was complete, we wanted to focus our efforts on gem functionality that our web app required.
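To make the naming decision concrete, the goal was for gem code to read like the Responsys docs: lists, members, profile extensions, and so on. The skeleton below illustrates that convention only; the class and method names are hypothetical, not the gem's actual interface:

```ruby
# Namespaces mirror the vendor's terminology so the code and the
# Responsys documentation use the same words. Illustrative only.
module ResponsysApi
  Member = Struct.new(:email)

  class List
    attr_reader :name

    def initialize(name)
      @name = name
      @members = {}
    end

    # Keyed by email so re-subscribing the same address is a no-op
    # (an assumption made for this sketch).
    def subscribe(member)
      @members[member.email] = member
      size
    end

    def size
      @members.size
    end
  end
end
```

Sharing a vocabulary with the vendor docs meant that when one of us hit a question, the relevant documentation page was obvious from the class name alone.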

Once we identified the critical path for the project, we made sure the gem was always one to two steps ahead of the web app's integration. We huddled briefly every day to give updates and determine whether we needed to shift the team's resources around so that we could always be going full steam ahead on our web app's integration. 

I'm happy to say we completed the project on time and that deciding to build the gem was the right decision: https://github.com/dandemeyere/responsys-api

We still plan to add more functionality to it as well as tighten up the documentation, but we've started what we think is a great base for ourselves and others to build upon. We've already had the fine folks on the TaskRabbit engineering team open pull requests to help improve the gem and we hope this post will be a catalyst for others to join the project as well.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/641185 2014-01-13T03:46:06Z 2017-02-09T22:38:57Z Operations Engineering @ ThredUP

By Syed Usamah

At thredUP, our engineers are broadly divided across three teams: Mobile, Web Engineering, and Operations Engineering. In this post, I am going to talk mainly about what we do here as operations engineers.

thredUP is a very operations-centric business, and the Ops Engineering team plays the vital role of providing the systems, tools, and analysis required to make the company's operations more streamlined, automated, and scalable.

That was a mouthful, so let me explain a little more.

When I joined the company in mid-2012, our warehouse was run entirely on homegrown iOS apps. It was quite the sight: everyone was walking around doing stuff with iPhones, be it receiving bags or taking photos and measurements of garments to put on our online store. For our size then, the approach was working incredibly well. But soon we realized that to achieve scale and higher efficiency, we needed something more robust. Thus, we intensified our focus on building new systems to improve our operational efficiency.

While doing so, we have not restricted ourselves to any particular platform or language. We knew from the start that to solve such a wide array of problems, we needed a diversified technology stack. The result: our bag receiving is still done using a custom iOS app, our garments are photographed using a .NET station, their images are processed using C++ and Python libraries, and the branding and categorization is done using a slick Backbone.js UI with touchscreens (mind you, this is just the inbound side of operations). That said, our backend is primarily in Ruby on Rails.

Another distinctive feature of Ops Engineering is that we do not have project managers or business analysts on our team; we do the managing and analyzing ourselves. For me, this makes for a really valuable experience, and I believe it both enhances and complements our team's programming abilities. On one hand, working directly with stakeholders to understand business requirements results in better and more suitable features. On the other hand, understanding the technical ins and outs of our systems helps us explain the functionality, tradeoffs, and limitations better to those stakeholders.

So what do we do on a daily basis, you may ask? I have no straight answer for this. We could be hiding in a room, chugging coffee and coding; we could be training the ops team on a new feature we just deployed; or we could be analyzing critical metrics from the last quarter and figuring out how we can make further enhancements. We are also not an all-work-and-no-play kind of team. Some of us work out together, others go to Meetups and conferences together, and others tour breweries and discover random bars in SF together.

By the way, we are hiring for a number of engineering positions. If interested, check out our job postings.


]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/632626 2013-12-19T22:36:19Z 2013-12-20T00:37:04Z Web Engineering at thredUP

by Dan DeMeyere - @dandemeyere

Every company has a different way of organizing and defining their engineering teams. At thredUP, we have three engineering teams. We have a Mobile Team that builds our native iOS/Android apps. We have an Operations Team that focuses on our distribution center's needs (getting our product online, fulfilling orders, handling returns, etc.). Lastly, we have our Web Team.

What is a Web Engineer?

A web engineer is an engineer that focuses on the user-facing, browser-based (desktop + mobile web) thredUP experience. From the moment someone types 'www.thredup.com' into their browser, until the moment they complete their checkout, they're in the web engineer's world. A world that consists of Rails/HAML/SASS/CoffeeScript, improving page load times, and measuring customer engagement metrics.

A web engineer operates differently than engineers on our other teams, although we do have an overarching engineering culture that encompasses all of us. The Web Team works in two-week sprints, and we have two different backlogs that we work out of: one for merchandising and one for growth & conversions. Each backlog has someone responsible for its roadmap and a sub-team of web engineers responsible for delivering each sprint's projects.

Merchandising is focused on surfacing the best product to our customers as fast as possible. They work on scoring algorithms, improving free text search functionality, collaborative filtering, recommendations, and much more. Their goal is to create a shopping experience uniquely tailored to each customer based on what they have searched for but haven't purchased and what we can recommend to them based on what other customers with similar tastes have purchased.
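As a toy illustration of the collaborative-filtering piece, one simple approach scores candidate items by how similar their buyer sets are to the buyer sets of items the customer already bought (Jaccard similarity). Our real scoring blends many more signals; this sketch shows only the core idea:

```ruby
# Jaccard similarity between two buyer lists: overlap over union.
def jaccard(buyers_a, buyers_b)
  union = (buyers_a | buyers_b).size
  union.zero? ? 0.0 : (buyers_a & buyers_b).size.to_f / union
end

# purchases: { customer_id => [item_ids] }
# Recommend items the customer hasn't bought, ranked by their best
# similarity to anything the customer has bought.
def recommend(purchases, customer_id, top_n: 3)
  owned = purchases.fetch(customer_id, [])
  buyers_of = Hash.new { |h, k| h[k] = [] }
  purchases.each { |cust, items| items.each { |item| buyers_of[item] << cust } }

  (buyers_of.keys - owned)
    .map { |item| [item, owned.map { |o| jaccard(buyers_of[item], buyers_of[o]) }.max || 0.0] }
    .sort_by { |_item, score| -score }
    .first(top_n)
    .map(&:first)
end
```

The "customers with similar tastes" idea above is exactly this overlap: two items look related when largely the same people buy both.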

Growth & conversions is always focused on a specific layer of our funnel. We consider getting new customers to the top of the funnel as the growth layer. Our conversion layers are getting customers to add a filter -> add to cart -> go to checkout -> complete checkout. Getting customers to become brand advocates and invite their friends is an example of something that affects both growth & conversions. All of our funnel's layers have clearly defined KPIs that we try to improve each sprint.
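The KPI math for those layers is just a chain of step-to-step conversion rates. A minimal sketch with made-up counts (the layer names are illustrative labels for the funnel described above):

```ruby
# Funnel layers in order; each rate is the share of customers from the
# previous layer who took the next step.
FUNNEL = [:visited, :added_filter, :added_to_cart, :went_to_checkout, :completed_checkout]

def conversion_rates(counts)
  FUNNEL.each_cons(2).map do |from, to|
    rate = counts[from].zero? ? 0.0 : (100.0 * counts[to] / counts[from]).round(1)
    ["#{from} -> #{to}", rate]
  end.to_h
end
```

Framing each sprint's goal as "move one of these rates" is what keeps the growth & conversions backlog honest: a project either shifts a layer's rate or it doesn't.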

Including myself, there are currently 8 engineers on the Web Team - two are on the merchandising backlog and six are on the conversions backlog. We used to shuffle which engineers comprised each sub-team, but over time engineers gravitated towards the team they cared about the most after they became engrossed with that team's metrics/goals. We feel that if you care about what you're doing, you'll make a bigger impact so for the most part web engineers can decide which backlog they want to work out of.

What does a web engineer do?

Now you have some context for what a web engineer is and how they help drive the product forward. A layman might assume that web engineers spend all their time cranking out code; however, web engineers are responsible for much more than that.

To take a step back and speak in very general terms, companies exist to serve the needs of the public and generate profits for their shareholders. For companies to exist for an extended period of time, they need to become profitable (there are obviously some exceptions here). Companies that are not profitable should be doing everything in their power to become profitable. At an ecommerce company, it takes time, research, and proper execution to close the gap and become profitable. Our executive, product, and marketing teams are tasked with providing the vision for how we're going to close that gap. The Web Team's role is to fulfill the needs of these teams so we can make their vision a reality.

In an ideal world, the more time web engineers spend on development, the faster we become profitable. My responsibility is to work with the Web Team and put processes into place that maximize development time in a sustainable way. If the processes are not sustainable, engineers will burn out, technical debt will accrue, bugs will overwhelm the user experience, and the team will fall apart.

The processes we have put into place determine the composition of how a web engineer allocates their time. Here is a simplified list of responsibilities for web engineers:

  • Complete the development stories in each sprint.
  • Only push code that improves the overall quality of the code base.
  • Thoroughly review your peers' pull requests to reduce the number of bugs that are introduced into production.
  • Squash bugs as they come in or create a story with enough details so another engineer can squash it.
  • Handle escalated customer service tickets to stay exposed to the problems our customers are encountering.
  • Use the product so that all development can be done within the context of the customer's needs.
  • Define and work towards professional development goals.
  • Mentor and/or teach other engineers to increase the technical competence of the team, transfer knowledge, and help your peers work towards their professional development goals.
  • Attend our engineering book club to further our growth as engineers.
  • Help evolve our engineering team culture.

It's a challenge to do all of these well, but no one said that engineering was an easy profession. At thredUP, we pride ourselves on hiring engineers that care about our product, our code, our processes, and each other as much as we do. If you would like to join our team, we have two open positions:

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/632623 2013-12-19T22:34:29Z 2013-12-19T22:35:15Z Service-Oriented Rails Engineer

by Dan DeMeyere - @dandemeyere

thredUP's engineering team has reached the point where we're focusing on breaking apart our tightly coupled systems and moving them into service-oriented systems that can scale. It's a challenging and fun stage in our team's evolution and we're looking for the right engineer to help us make this happen.

We're looking for someone who has experience building SOA systems or at the very least has worked on a team that already has their services scaled out. We're looking for someone who becomes excited about separating responsibilities and likes to talk about the merits of different messaging patterns. We know you're out there.

Hiring is a two-way street. We need to be a fit for you too, so let me tell you about our team. We care a lot about engineering team culture. The engineering team meets every two weeks to discuss and iterate on it. We have an engineering book club. We've been known to geek out on board games. We care about mentoring, professional development, code quality (yes, we use CodeClimate), thorough testing, and reviewing code in a way that informs and educates the team to make us all better engineers. These are qualities our team is passionate about. If you're passionate about them too and you have the experience we're looking for, we'd love to hear from you.

More About Our Team

  • 2 Rails teams (Warehouse Operations & Web) comprising 12 engineers. 1 Mobile team with 2 engineers (1 iOS, 1 Android).
  • Code reviews are done through pull requests on GitHub.
  • We primarily use MySQL, but also use Redis/Memcache/NoSQL when applicable.
  • AWS for our server needs.
  • Jenkins for our CI server (RSpec/Jasmine tests).
  • We use Campfire to communicate and post funny GIFs during the day.

Desired Skills & Experience

  • 2+ years of Rails experience either architecting or working within a service-oriented architecture.
  • Believer in testing.
  • Experience with message queueing (ex: RabbitMQ). Preferred, not required.
  • Bachelor's degree or above in Computer Science.

Compensation & Perks

  • Competitive salary and equity.
  • Work from home Wednesdays.
  • Health, dental, and vision insurance.
  • Apple LED Cinema display + computer of your choosing.
  • Stocked fridge + snack cabinets.
  • Office is located at 2nd & Market in downtown San Francisco (right next to Montgomery BART station).
]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/604994 2013-09-30T12:00:00Z 2017-02-09T22:40:25Z Integrating your rails app with Dropbox
By Chris Homer - @chrishomer

At thredUP, we recently started using Dropbox to do more than just store and share files among employees. We've now integrated it with a couple of our applications so that certain data sets get archived there. We used to send them by email if they were small enough, or save them to a server and download them manually. Now, with Dropbox, they get dropped in a shared folder and the people who need access have it.

Implementing this integration hit a couple of hiccups along the way, and I want to share how we did it in this post.

First, we used the dropbox-sdk gem (https://www.dropbox.com/developers/core/sdks/ruby) to handle most of the heavy lifting, but you still need to implement a workflow to gain authorization with Dropbox.

We wanted to be able to authorize a connection and persist the session to the database so we could restore it and work with Dropbox without having to reauthorize. To accomplish this, we wrote a model for storing a user's authorization session information:

As you can see, it's a fairly compact class. Here's how to use it:
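A sketch of what such a model can look like (class and method names here are illustrative, not our exact code; `DropboxSession` and `DropboxClient` come from the dropbox-sdk gem, and the constructor arguments varied a bit between SDK versions):

```ruby
# Illustrative sketch -- persists a serialized Dropbox session so it can be
# restored later without re-running the browser authorization step.
class DropboxAuthorization < ActiveRecord::Base
  # assumes a table with a text column named `serialized_session`

  # Step 1: start the OAuth flow and print the URL a person must visit.
  def self.begin_authorization(app_key, app_secret)
    session = DropboxSession.new(app_key, app_secret)
    session.get_request_token
    puts "Authorize here: #{session.get_authorize_url}"
    session
  end

  # Step 2: once the user has clicked "Allow", trade the request token for
  # an access token and save the session for later reuse.
  def self.complete_authorization(session)
    session.obtain_access_token
    create!(:serialized_session => session.serialize)
  end

  # Restore a ready-to-use client from the stored session.
  def client
    DropboxClient.new(DropboxSession.deserialize(serialized_session), :app_folder)
  end
end

# Usage (one-time setup from a console, then reuse anywhere):
#   session = DropboxAuthorization.begin_authorization(APP_KEY, APP_SECRET)
#   # ...visit the printed URL, click "Allow"...
#   auth = DropboxAuthorization.complete_authorization(session)
#   auth.client.put_file("/reports/latest.csv", File.open("report.csv"))
```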

With this wrapper class, it's really that simple. You could always implement a simple scaffold for this class and customize it to handle the auth workflow through the web browser - we don't add accounts that often, so we didn't take that extra step.

A few more client use tips:

Hope this helps you integrate with Dropbox in your own applications!

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241606 2012-07-30T14:11:00Z 2013-10-08T16:13:10Z Reviewing MailChimp's Transactional Email Service: Mandrill

by Dan DeMeyere - @dandemeyere

As soon as your website amasses a sizable number of users - probably somewhere around 50,000 or so - you're going to require two different types of emailing services: one for mass email (emails sent to your entire user base) and one for transactional email (emails sent to individual users after an action such as placing an order).

Mass Emailing Service

The reason you need a separate service for mass emailing is that sending email is both CPU-intensive and time-consuming for your servers. The thredUP dev team made the switch to a 3rd-party service after we tried using our own servers to send a promotional email to our first 100k users. The email took about 4-5 hours to complete, and we even spun up extra Amazon EC2 instances to help fire the emails off. Considering the email was a 24-hour site-wide promotion, it created an awkward user experience, as some users had more time to use the promotion than others. Additionally, when people started talking about the promotion on Facebook, some users wrote in wondering why they hadn't received it. We knew at that point we had to switch to a service that scaled better than our own, so we made the switch to MailChimp.

Services such as MailChimp or ConstantContact are built to send out hundreds of thousands of emails in a matter of minutes, which is exactly what we needed. I had used ConstantContact at a previous job, and while it got the job done, it was expensive and outdated. For example, if you go to ConstantContact's developer docs, here are the library wrappers they offer: ColdFusion, PHP, and C#. So if you built your website 10 years ago, their libraries are probably relevant (sorry, PHP).

When you compare this to MailChimp, which has an entire mini-site dedicated to their API documentation, the decision of which to choose should be obvious. As it currently stands, the thredUP Rails app is integrated with MailChimp to add/remove users from our mailing list based on the email preferences they have chosen. Whether we send email through MailChimp or manually through our app, the user's email preferences stay synced. This is just one of the many ways thredUP has integrated with MailChimp through their API.

Transactional Emailing Service

Similar to mass emailing, you can get away with sending transactional emails on your own for a while (just make sure you're doing background queuing, as sending an email should never be reflected in a user's page response time). Since these emails are sent on the fly (i.e. moments after a user has placed an order), the hardware requirements are much lower than for mass emailing, as the computational load of transactional emails is spread out across the day.
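To illustrate that queuing point with a toy example (a real app would use a proper background job system, not an in-process thread), the request path only enqueues, and a worker does the slow delivery work:

```ruby
# Toy in-process example: the web request enqueues the email and returns
# immediately; a worker thread handles the slow SMTP work off the request path.
EMAIL_QUEUE = Queue.new
SENT = []

worker = Thread.new do
  while (job = EMAIL_QUEUE.pop) # a nil sentinel shuts the worker down
    # In a real app, the SMTP (or email API) call would happen here.
    SENT << job
  end
end

# Inside the request cycle: enqueue and move on -- no waiting on SMTP.
EMAIL_QUEUE << { :to => "user@example.com", :template => "order_confirmation" }

EMAIL_QUEUE << nil # stop the worker for this demo
worker.join
puts SENT.length # => 1
```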

The reason you make the switch to a service that handles ad hoc emailing isn't because of a lack of hardware infrastructure (at least initially); it's because there are other factors at play once you're sending out thousands of emails a day to different domains (Gmail, Yahoo Mail, Hotmail, etc.). You have to start worrying about ISP monitoring, IP whitelisting/blacklisting, and SPAM reports. These issues can lead an email provider like Hotmail to throttle or completely stop your website's emails from reaching the user's inbox, which creates a lot of user experience problems (i.e. try resetting your password at any website without receiving email).

What if you're a data-driven company? Are you going to build an in-house email analytics suite? I'd hope not. For the reasons I mentioned and many more, it's better to pay someone who is better at delivering transactional email than you are. Initially, we made the switch to SendGrid. SendGrid was great to us: good pricing, good metrics, good API, and much more. We used SendGrid for over a year, but it was lacking one component we desperately needed for our marketing team - online email editing.

Given that thredUP is a data-driven company, it shouldn't be a surprise that we run a lot of A/B tests. Different email subject lines, call-to-action buttons, and headers are all very easy to change... for developers. Developers quickly became a bottleneck for the marketing team rolling out changes to emails. Even if there was just a typo that needed changing, someone from the marketing team would have to ask a developer to make the change in the code and deploy it before the typo was fixed. That process wasn't fast enough, and no developer wants to be updating emails all day, so we started looking for an email service that was transactional and provided an online interface for editing email templates. Enter Mandrill.

Mandrill

Mandrill is MailChimp's new transactional email service. It's awesome.

I'm a big fan of Mandrill for a number of reasons. First, let's start with their online email editor. Through their interface, you can view all the emails you have put in their system, edit/delete/publish any email, and add new ones at will. This allows non-developers to go in and make text changes, and if they have HTML experience, they can edit the entire email.

If you're wondering about dynamic information in an email (user's name, order totals, etc.), don't worry, as they've thought about that as well. When your app sends the API request to Mandrill, you pass in the dynamic data and they inject it into the email for you. A suggestion for the Mandrill team: it would be nice if there were some sort of partial templating system. We have shared layouts for a bunch of our emails, and when we want to change the design, we're forced to update every email for that layout.
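To give a feel for the shape of that API call, here's a sketch of the request body for a templated send (field names follow Mandrill's public messages/send-template endpoint; the helper, template name, and merge variables are made up for illustration):

```ruby
require 'json'

# Hypothetical helper: builds the JSON body Mandrill expects for a
# templated send with per-message merge variables.
def mandrill_template_payload(api_key, template_name, recipient, merge_vars)
  {
    "key"              => api_key,
    "template_name"    => template_name,
    "template_content" => [],
    "message" => {
      "to" => [{ "email" => recipient }],
      "global_merge_vars" => merge_vars.map { |name, value|
        { "name" => name.to_s, "content" => value }
      }
    }
  }
end

payload = mandrill_template_payload("API_KEY", "order-confirmation",
                                    "user@example.com",
                                    :first_name => "Dan", :order_total => "$42.00")
puts JSON.generate(payload)
# This JSON gets POSTed to
# https://mandrillapp.com/api/1.0/messages/send-template.json
```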

Next up is their customer support - so far it's been stellar. This past Saturday I noticed a number of our system emails weren't going out, and from our Mandrill dashboard I could tell it was because we had exceeded our hourly quota. We made the mistake of queuing up an email to 80k of our users using their transactional system, and since we're still new to Mandrill, that exceeded our hourly quota, which they have in place to ensure no one abuses their system. I contacted their customer support team to see if there was something they could do to make a one-time exception. Within 15 minutes they had already responded and temporarily removed our hourly limit to flush out our queue. It's reassuring to know that in the event something goes wrong, their technical support team is accessible and able to take action.

If you have used or visited MailChimp, you know that their team prides itself on building a well-designed product, and Mandrill lives up to those high design standards. The Mandrill dashboard is fantastic. You're welcomed by a large chart of the metrics you care about (emails sent, delivered, bounced, rejected, opened, clicked through, etc.) as well as stats on how many emails you've sent this hour, what your quota is, and your app's email reputation. From there you can start drilling down into more detailed views. Here's the kicker: they have their own free iPhone and Android apps as well:

I can't speak for the Android version, but the iPhone app's design and UI are great. If you're like me, the more remote access you have to the vital stats of your app, the more comfortable you feel getting time away from your computer.

Lastly, their pricing is very affordable. If your website is small, there's a good chance you can integrate with Mandrill for free.

When I noticed they had a free tier, I integrated Mandrill into my website's contact page, which took all of 2 minutes to configure. I plan on doing a follow-up post on how to install Mandrill in your Rails app as well as talk about some of the things their API has to offer.

If you have any thoughts about Mandrill, please comment below!

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241608 2012-07-26T22:56:00Z 2017-02-09T22:39:02Z Dear Experienced Ruby Dev, Come Join Our Team

by Dan DeMeyere - @dandemeyere

First things first: I'm not a recruiter. You won't see the following terms in this post: 'hacker', 'rock-star', 'pirate', 'guru', or 'ninja'. I'm not trying to get referral money for getting someone hired. I have no ulterior motives other than trying to find someone who has solid Ruby experience and can join our team, tell us what we're doing wrong, teach us to write code that can scale, and be a part of a team that we have worked hard to build.

Now that the disclosure is out of the way, let me introduce myself. My name is Dan DeMeyere and I'm a developer at thredUP. We have 3 development teams - web, ops, and mobile - with 4, 3, and 2 developers respectively. I'm currently the lead for the web team, and while I have a solid and practical understanding of Ruby (our website is a Rails app), I'm no expert. I want to learn and so does the rest of the team. Some people say they want to learn, but we mean it. We do 'lunch & learns' on a regular basis to teach each other about new technologies we come across, interesting solutions we implemented, and new libraries/classes we've written, but we're still lacking that go-to Ruby developer who can push us to the next level technically.

While this post might seem like a plea, it's not written out of desperation - we just care a lot about our team and what we're trying to accomplish. We're not slouches, either. Our team is hard-working and highly productive. We roll out features every week and we make sure not to accrue technical debt along the way. We have RSpec and Cucumber test suites and a CI server to run them.

Our Technology Stack

While we're not locked into anything in particular, here is what we're currently working with:

  • Rails 3.1 (soon to be upgraded to 3.2 on Ruby 1.9.3)
  • HAML/SASS/CoffeeScript for our views
  • MySQL (and a couple MongoDB/Mongoid models)
  • EC2, RDS, and S3 for our server and hosting needs
  • Capistrano for deployment
  • GitHub for repository management
  • Asana for project management

If you're curious what else we use and work with, I'm happy to expand via email.

Working at thredUP

Working at thredUP is great. My favorite question to answer when I'm interviewing someone is 'why do you like working at thredUP?'. I have a field day with that question because I genuinely love working at thredUP. thredUP is a meritocracy. Your contributions to the team and company trump everything. Better yet, upper management cares and they pay attention to everyone. If you're putting in extra work, it will not be lost on them. They're also transparent and accessible. Our CEO shows us the same presentation he shows our board every month. He'll tell you how much money we have in the bank, what our burn rate is, what he's excited about, what he's worried about. There are no closed doors at thredUP, we're all in this together.

Also, thredUP has some great job perks:

  • Work From Home Wednesdays
  • Catered lunches twice a week (plus fully-stocked fridge/pantry)
  • No vacation day limit
  • Big ass Apple LED Cinema display and a computer of your choosing
  • Office at 2nd & Market in downtown San Francisco right next to the BART
  • Competitive compensation

I could go on about why working at thredUP is awesome, but what it comes down to is that you would be joining a close-knit, no-ego dev team that is always looking out for one another. We all tackle bugs and help with Customer Service if a user is experiencing a technical glitch. If something breaks, we don't point fingers at the person who coded it. If you write a piece of the app that experiences a lot of bugs, you're not going to be pigeon-holed into fixing them. No one is stuck 'owning' any piece of the app - we spread the knowledge so the team stays flexible. There are no rogue devs on the team - everyone contributes in a coordinated effort.

We have fun too. Other than the standard company-wide happy hours, we've been known to throw Halo gaming parties, have Scotch-drinking Settlers of Catan nights, and do things like Casino Night. You'll also hear a lot of Arrested Development and Super Troopers quotes being thrown around, mixed in with a cavalcade of sarcasm. Lastly, and what the development team holds very close to our hearts, you'll be exposed to our treasure trove of awesome GIFs that we post during the day in our Campfire chat room.

I'm not listing any ridiculous job requirements that you see in job postings. If you're genuinely interested, have the Ruby experience we need, and are able to work in SF, please email me - dan@thredup.com.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241610 2012-07-24T03:03:00Z 2013-10-08T16:13:10Z SEO: What Developers Need to Know

by Dan DeMeyere - @dandemeyere

SEO has amassed a bad reputation over the years. Is it legit? Is it snake oil? The truth lies somewhere in the middle. SEO can improve the quantity and quality of organic traffic you receive, but it's not magic. The SEO 'specialists' who suggest building link farms, low-value landing pages, and other shady techniques designed to game PageRank (Google's ranking algorithm) are largely to blame for the negative SEO stigma, but SEO is real and it is valuable.

While there are legitimate SEO professionals who can add value to your website, no one person or department should be solely responsible for SEO. SEO should be a team-wide effort.

Education, authenticity, and diligence should be at the core of this effort. Product, marketing, and development teams should all be educated on SEO and its best practices. It's up to these teams to ensure that the SEO techniques implemented are in the best interest of the user (authenticity) and that the SEO efforts are a continued focus during the construction of your website (diligence).

I recently presented my research and thoughts about SEO to our entire team at thredUP. While the presentation (click here to view it) was tailored to a non-technical audience, my focus in this post will be on what developers need to know about SEO.

Google PageRank

It's important to prioritize your development resources to yield the biggest SEO gains for the smallest amount of development effort possible (aka leverage). The first step is to determine your organic traffic breakdown by search engine source. If you're like us, Google is probably your #1 source (over 95% of thredUP's organic traffic comes from Google). Because of this, you should focus on optimizing towards Google's search result ranking algorithm, PageRank.

Regardless of whether you're a developer or not, I believe a basic understanding of how search engines work is all that's required to be effective with your SEO efforts. Based on what Google itself has released and what other websites have tested and documented, there are widely known and accepted factors that play into Google's PageRank algorithm.

Discovery

Google has to be aware of your website's existence and the content you offer before PageRank can evaluate and index it. Here are some ways to ensure Google finds your content:

  • Manual submission - through Google Webmaster Tools, you can directly add your URL to Google's index.
  • Backlinks - when websites other than your own link to your website and its content, it will provide Google's crawlers another way to discover what your website has to offer.
  • Sitemaps - an HTML/XML file that lists every link you want Google (or your user) to know about on your website.
  • Structured linking - having an organized flow of links on your website allows Google's crawlers to more easily traverse your website and understand the relationships between the content on your website through your links.
  • Product feeds - if you're an e-commerce website, you can send Google your inventory along with meta information through their Google Merchant account center to help ensure what you're selling is showing up in Google search/shopping results.
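For reference, a minimal sitemap entry in the sitemaps.org XML format looks like the following (the URL and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/products/red-dress</loc>
    <lastmod>2012-07-01</lastmod>
    <changefreq>daily</changefreq>
  </url>
</urlset>
```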

SEO Elevators

There are certain things that will elevate you above others in Google search results.

  • Quality content - unique and genuine content trumps everything (ex: Wikipedia's quality of content is the reason they rank #1 in search results for almost everything). Google only cares whether you're providing value to the user, so if a user clicks on a search result and immediately clicks 'back' because the page wasn't helpful, Google knows and will lower the page's ranking.
  • Backlinks - the more websites that link to yours, the better your search rankings will be; however, the quality of the backlink is important as well. If a well-known and respected website links to your website, they are legitimizing your website through their own website's PageRank credibility. Conversely, if a small website that is relatively unknown links to your website, the benefits will not be substantial.
  • Best coding practices - being HTML compliant (ex: using alt attributes for images, title attributes on links, etc.), using a hierarchy of HTML elements (an h1 tag for the page header, h2 tags for sub-headers, etc.), and building descriptive URL schemes will further enhance Google's ability to add context to your website's content, as well as signal to Google that you take care in the construction of your website.
  • Keywords - if you know the main keywords people are using in their Google search queries for your website, which you can view inside Google Analytics, you can then tailor your meta information content around those keywords to further emphasize their importance.
  • Speed - Google knows how long your page takes to load (Google Analytics) and they know users don't like to wait so the speed of your website will impact your ranking.
  • Usability - Does your website need JavaScript to work? Do you have too many ads? Is your layout conducive to a good user experience? Google's latest PageRank updates include the ability to load JavaScript and CSS to analyze the layout and usability of your website.
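Pulling a few of those elevators together, a product page fragment that follows the best-coding-practices point might look like this (URLs and copy are invented for illustration):

```html
<!-- descriptive URL: https://www.example.com/women/shoes/red-ballet-flats -->
<h1>Red Ballet Flats</h1>
<h2>Product Details</h2>
<img src="/images/red-ballet-flats.jpg" alt="Red leather ballet flats, size 8">
<a href="/women/shoes" title="Browse all women's shoes">More women's shoes</a>
```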

SEO Detractors

In general, if you don't follow the best practices mentioned in the list above, you're going to detract from your website's SEO potential. However, there are some tactics Google will actively penalize you for if they find out you're using them. If you want to see an example of how serious Google's penalties are, click here and once the slide loads, press 'i' on your keyboard.

To avoid Google PageRank penalties, stay away from these SEO tactics:

  • Keyword stuffing - if you embed too many keywords into your website's meta information HTML tags, Google will have a tougher time deciphering the signal from the noise. If you really abuse the use of keywords, Google can penalize you for intentionally trying to game the system.
  • Cloaking - putting scrape-able content on a page but hiding it from the user is a big Google 'no-no'. Google can load the CSS on your website, so they know what content is visible to the user and what you're hiding from them.
  • Low-value pages - creating hollow pages that don't provide a lot of quality content to the user. This is also known as 'content farming'. Good for traffic, but bad for the user. If you create keyword landing pages, make sure you're providing value to the user.
  • Link farms - if you try to 'game' the number of backlinks your website has by creating new websites that link back to your original website, Google will find out and it will hurt your page ranking.
  • Plagiarism - if you use other people's content, Google has ways of detecting who the real author is and they can issue page ranking penalties if you claim the content as your own.

SEO Summary

Users come first. Google's search engine is designed to give users the information they're looking for as fast as possible. Google doesn't care about your website; it's all about the user, so make sure you're building new features for the user and not building features to attract more organic traffic. If you build a genuinely great product for your users and follow SEO best practices while building it, you'll see results in your organic traffic. Here are some guidelines to follow while you're building your product:

  • Define & communicate keywords - determine what keywords perform best for your website and communicate them to your development team so they know what meta information to focus on (i.e. page titles, meta descriptions, alt text, etc.).
  • URL Naming - take URL naming seriously! Grab a developer and bounce naming convention ideas off of them. When you change a URL at a later date, it hurts the page's SEO juice. 10,000 backlinks to one link is greater than 5,000 backlinks to two different links that end up at the same page. Think about it.
  • Quality content - the copy/content on a page matters. Copying and pasting is the devil of SEO. Be creative, take your time, and make it count.
  • Sitemaps - are you adding a new feature? Make sure you add the appropriate links to the HTML & XML sitemap. Make sure you have FAQ content in place.
  • Don't be lazy - have an image? Make sure there's alt text. Have a link? Make sure there's title text. Use the right elements (h1, h2, p, etc.).
  • Think hard on the meta - when you place the alt text, make it meaningful. If you care, you'll take the time to do it right.
  • JavaScript beware - if a link requires jQuery to work, you're doing it wrong. If a bot can't get from page to page, you're doing it wrong. Disable JavaScript and load your website; the content you want indexed had better still be there.
  • Plan from the beginning - before you start coding, think about SEO. SEO can dictate execution (i.e. don't bring content in via AJAX).
  • Natural feature flows - does the feature flow intuitively? Is it organized? Is there a hierarchy? Can you build a directory page for it?
  • Don't be shady! - Google doesn't like it. Developers don't like it. It feels dirty-all-over to code something that does not benefit the user and has negative, ulterior motives.

If I missed anything or you disagree with my conclusions, please comment below.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241613 2012-06-25T14:10:49Z 2013-10-08T16:13:10Z Upgrading Rails from 3.0 to 3.1: What the Guides Don't Tell You

by Dan DeMeyere - @dandemeyere

We upgraded the thredUP app from Rails 3.0 to Rails 3.1 this past week. On the day we deployed the upgrade, our test suite was green and everything looked solid locally and on our staging server, but out of sheer paranoia I decided to take one of our front-end servers off the load balancer and deploy to it so I could poke around in the Rails production environment... and then a bit of chaos ensued.

There are some aspects of a production environment that can't be replicated in your test suite and can't be invoked locally. A good example of this is SSL. Did you know that `ssl_required` was deprecated in Rails 3.1? I didn't (here's the solution btw). Then there are production-only gems to worry about (ex: monitoring services). New Relic's gem required a bump for Rails 3.1. And by required a bump, I mean the gem caused the entire app to crash with a cryptic error message. Fun!

You're probably thinking that this upgrade lacked preparation. Quite the contrary. Before I started the upgrade, I read guide after guide about upgrading to Rails 3.1. Because we wanted to upgrade in phases, I wasn't interested in the asset pipeline requirements, but it seemed as though that was the only thing every guide focused on. What the guides weren't covering were all the gems that would have to be bumped (and their dependencies) and what the ramifications of those bumps would be. ActiveRecord, Cucumber, and PayPalNVP were a few gems in particular that had noticeable changes deserving some love in the upgrade guides. So that's what this post is: some love for the not-so-obvious deprecations that should give you clearer expectations of what you'll run into when upgrading to Rails 3.1.

Active Record

Condition hash syntax is on its way out. It's been fully deprecated from named scopes and the `find()` method.
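A quick before/after of that deprecation (the model and scope names are placeholders, and this assumes a Rails app, so it's shown as a fragment):

```ruby
# Rails 3.0 -- condition hashes, deprecated in 3.1:
User.find(:all, :conditions => { :state => "active" })
scope :active, :conditions => { :state => "active" }

# Rails 3.1 -- relation-style queries:
User.where(:state => "active")
scope :active, where(:state => "active")
```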

Another big gotcha is `include_root_in_json` now defaulting to false for every model. Here's the difference:

It's a small difference between the two, but if all your JSON responses previously had the root included and you have external apps (iPhone, Android, etc.) relying on your app's internal RESTful API (like we do), then it's a significant difference, as it's possible all of those API endpoints will have to change or the apps will break. So how do you change this? Below I'll demonstrate how to set an app-wide default and then how to override the setting in individual models.
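To make the difference concrete, here are the two JSON shapes side by side, plus the settings that control them (the hashes mimic what `user.to_json` would return; the `User` model and initializer path are placeholders):

```ruby
require 'json'

# Rails 3.0 default: root included.
with_root    = { "user" => { "id" => 1, "name" => "Dan" } }
# Rails 3.1 default: root omitted.
without_root = { "id" => 1, "name" => "Dan" }

puts JSON.generate(with_root)    # {"user":{"id":1,"name":"Dan"}}
puts JSON.generate(without_root) # {"id":1,"name":"Dan"}

# App-wide default, e.g. in an initializer:
#   ActiveRecord::Base.include_root_in_json = true
# Per-model override:
#   class User < ActiveRecord::Base
#     self.include_root_in_json = false
#   end
```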

Gem Bumps

Almost every Rails app relies on a myriad of gems. Including our development and test environments, thredUP uses over 50 external gems and a handful of our own private gems we've created over time. When we upgraded from Rails 3.0 to Rails 3.1, there were some notable gems that required version bumps.

There are some big boys in there (ex: mysql2 is our MySQL adapter). If you use any of the above in your app, which I'm betting you do, make sure to take note of their version bumps.

PayPalNVP

If you use the PayPal NVP gem, expect to make some changes. The biggest for us was that the PayPal NVP API requires all authentication upon object instantiation. Example:
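A sketch of the shape of that change (the argument names and constants are placeholders, and the exact signature depends on your gem version):

```ruby
# Before the bump, credentials could be supplied when making the call.
# After it, they must be handed over when the object is instantiated:
paypal = PayPalNVP.new(
  :user      => PAYPAL_API_USER,      # placeholder constants
  :password  => PAYPAL_API_PASSWORD,
  :signature => PAYPAL_API_SIGNATURE
)

# Subsequent calls reuse the credentials stored on the instance:
response = paypal.call_paypal("METHOD" => "GetBalance")
```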

If you do full 3rd-party API mocking in your test coverage, you could see how this change would be problematic.

Lastly, make sure to read the Rails 3.1 release notes or Ryan Bates' gist of 3.1 RC4 changes. That's where some of the standard Rails-only deprecations will be. The biggest one to note: all forms (or any Rails view helper that outputs HTML into the source) require a `=` instead of a `-`. It's a simple change, but if you use a hyphen by accident, nothing is output to the source, and it can be quite confusing to debug since no errors occur and nothing appears in the HTML.
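In HAML terms (our views are HAML), the difference looks like this; `@user` is a placeholder:

```haml
-# Silent Ruby: runs, but nothing reaches the HTML -- easy to miss
- form_for @user do |f|
  = f.text_field :name

-# Outputs the <form> tag as intended
= form_for @user do |f|
  = f.text_field :name
```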

If you came across any unique errors when you upgraded your app to Rails 3.1, please mention them below in the comments and I'll update the post afterwards to include them.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241614 2012-06-19T06:08:00Z 2017-02-09T22:40:42Z Refreshing Web Design Trends

by Dan DeMeyere - @dandemeyere

In the past year, the post-2.0 web design era has really come into its own. I wrote about the start of this when web designers first started to embrace HTML5 and what it has to offer, but the time has finally come when over-the-top Flash websites, such as this one, are no longer the standard for those pushing the envelope of creative web design.

Here are 4 websites that have used HTML/jQuery and creativity to build refreshing designs:

SlaveryFootPrint.org

Go to SlaveryFootPrint.org. It's ok, I'll wait for you to actually click the link. Once the page loads, you'll notice you have two options. Click the first one titled 'What? Slaves work for me?'. Now start scrolling. Killer, right? With a little bit of jQuery and some clever CSS, this website is re-inventing the way to deliver information to users. Lines of text are brought in at the same pace as your scrolling, allowing you to control how quickly you want to read their text. On top of this, they have different graphics pushing the text into place to make the reading experience fun.

JamesAnderson613.com

A lot of freedom is granted when designing a personal website. It can say anything and look as wild as you want it to look. Shockingly, one of my favorites is not a designer's or developer's website - it belongs to a cricket player. James Anderson's website is awesome, and it's showing off one of my favorite trends: making numbers sexy on the web.

HTML5, specifically the addition of Canvas and SVG, has given designers free rein to design graphs, charts, and other data visualizations without having to worry about how they will be built. Thanks to JavaScript libraries like D3.js and Highcharts.js, developers no longer have to worry about Flash or any other antiquated plug-in for interactive visuals. And while D3/Highcharts are two of my favorites, there are plenty of other emerging solutions.

Tangent: if you're someone who is looking to make your own website, I strongly recommend you look over this Quora list of best personal websites on the web.

DIY.org

DIY.org is a very cool website. It's a kids' website that highlights really cool things that kids and parents make, and it showcases the last trend I'll mention in this post that I'm excited about: high-powered web graphics. In the past, image sizes were always a huge cause for concern because of bandwidth limitations and costs. As broadband internet becomes more prevalent and static asset storage (e.g. AWS S3) becomes more and more affordable, image sizes become less and less of a concern.

When you go to the DIY homepage, you're presented with a stunning graphic. After a couple of seconds, the text overlay disappears and you're able to grab the graphic and move it to explore (with a nice easing effect to simulate gliding). The entire graphic is larger than 3000x3000 pixels, but through some clever coding, the actual images that comprise it are 35 tiles of 387x387 pixels each. I would also bet good money that only the necessary tiles are loaded until you try to explore surrounding tiles. The reason I think this will become a web trend is that this same high-quality graphic tiling technology could be used for a web-based video game built on top of HTML5. A 'level' in the game could span hundreds of thousands of pixels, but if the graphics are loaded only when needed, it could realistically be done with ease.

DangersOfFracking.com

This last website is a combination of all three trends I've mentioned: re-inventing the way information is delivered, beautiful metric visuals, and high-powered web graphics. By utilizing all three, the creator of this website was able to bring to light a very serious topic (hydraulic fracturing) and deliver the information in such a compelling way that you want to learn. You need to keep scrolling. They have gamified the process of digesting information on the web. It's brilliant.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241616 2012-02-21T22:01:08Z 2013-10-08T16:13:10Z JavaScript Namespaces

JavaScript is a mess. Well, it can be, and it certainly used to be once upon a time. I believe the biggest problem is that there are so many ways to write JavaScript that finding "the best way" is nearly impossible. Doing so while working on a team with varying levels of JavaScript experience is that much harder.

Here are some examples of how different people might code the same behavior:

The Worst

This style has been dead for a long time (phew), mostly thanks to the arrival of Prototype, jQuery, and the other major JavaScript libraries. However, we still see a lot of "floating" function definitions of this type. This is scary for two reasons.

  1. It is difficult to find semantically related code. For example: baz() might handle something entirely different from foo(), forcing you to scroll around looking for one or the other.
  2. You might be overwriting something from another library (or vice versa).

The second point is the one I'm targeting in this post. What would happen if I were to include a second JavaScript file with a separate function also called bar()? The first implementation would be blown away by the second one.

A Better Approach

Wrapping JavaScript objects is a better approach. This time, we can have a more meaningful name to separate a block of code. But there's still a small possibility that this could break! What if I downloaded some cool new JavaScript library the next day to handle something else, but it just happens to have an object named stuff that it needs in order to work correctly? Things would explode!
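That snippet is also lost; here is a sketch of the wrapped-object style and the collision it still allows (the stuff object and the imagined library are hypothetical):

```javascript
// app.js - semantically related functions wrapped in one object
var stuff = {
  foo: function () { return 'doing foo'; },
  bar: function () { return 'doing bar'; }
};

// cool-new-library.js - happens to need its own global named stuff
var stuff = { version: '1.0' };

// Our functions are gone: stuff.foo is now undefined. Things explode.
```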

This calls for extreme namespacing!

Don't Be Afraid to Make Things Longer Than You Might Think They Have to Be

Mmmmm names! Here I create a thredup object that will store all things thredUP. Normally, I would put the thredup = {} in a separate file that would always be included first (kind of like jQuery). You might even put some stuff in the definition, like standalone utility functions or useful "static" variables. Now we have to refer to our stuff object via its full name, thredup.stuff. It should live in its own file called thredup.stuff.js. This ensures that the only way to "accidentally" overwrite something is by not looking at the JavaScript includes (which never happens, right?).

I do a lot of other crazy things in this snippet. I use a self-calling, anonymous function and pass the thredup object right in. I do this just in case we're going to add more than one object to thredup that might need to share data through closure.

The function for stuff creates an object that I call self which represents the "true" object. This allows me to have private members. The variable foo is passed in as an argument. I then create a new var foo and assign it to the value passed in. This ensures that the local copy of foo is set for the rest of the closure. Every method that I go on to define will have access to that foo, but foo cannot be accessed outside of the class, as in line #29. This is also true of privateMethod(). I didn't stick it on the self object to keep it private. The thredup.stuff function then returns the self object, which exposes the functions bar and baz.
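The original snippet is missing from the archive (so the line #29 reference above won't line up), but it can be reconstructed from the description; the names match the prose, while the function bodies are hypothetical:

```javascript
// thredup.js - always included first
var thredup = {};

// thredup.stuff.js
(function (thredup) {
  thredup.stuff = function (foo) {
    var self = {};   // the "true" object that will be returned

    var foo = foo;   // local copy; every method below closes over it

    function privateMethod() {   // not stuck on self, so it stays private
      return 'private(' + foo + ')';
    }

    self.bar = function () {     // public
      return 'bar sees ' + foo;
    };

    self.baz = function () {     // public, can still reach the private parts
      return privateMethod();
    };

    return self;                 // exposes only bar and baz
  };
})(thredup);

var widget = thredup.stuff('hello');
widget.bar();        // 'bar sees hello'
typeof widget.foo;   // 'undefined' - foo is private
```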

Huh?

JavaScript can get a tad confusing, especially when we stretch it to its limits, but I think writing objects and widgets this way leads to cleaner, more legible code. I also think it's a little easier to grok for someone coming from a more traditional OO language (self.foo => public, var foo => private).

Please feel free to ask questions in the comments, I'd be more than happy to answer them.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241618 2012-02-10T21:55:00Z 2013-10-08T16:13:10Z Chrome Developer Tools Tricks & Tips

by Dan DeMeyere - @dandemeyere

If you're a web developer and you have spent time writing JavaScript and/or styling with CSS, odds are you're familiar with Chrome Developer Tools (CDT). When I discovered CDT, my front-end productivity sky-rocketed. Editing CSS on the fly, inspecting HTML, and interacting/debugging with the page's JS console have become more ingrained in my day-to-day workflow than any other development tool I use. I love TextMate, but if someone put a hypothetical development gun to my head and asked me to choose between CDT and TextMate, I would be asking around for a good Vim tutorial, if you know what I mean.

Quick disclosure: all of the tricks and tips below have been tested with CDT, but there's a chance they won't work with Firefox's Firebug add-on.

JavaScript Debugging

There are three very useful tools in CDT to debug your JS. The first is the most obvious - the interactive JS console. When you open up the developer tools (⌘ + Alt + I), the interactive JS console resides under the 'Console' tab. You can also press (⌘ + Alt + J) to load CDT with the console tab focused. This console allows you to interact with the page's DOM. You can execute any JavaScript you want here and also interact with objects (like the window object or any local objects you've defined) and inspect their contents/functions. One important note: by the time this console loads, the DOM has loaded as well, which means if you're using jQuery and it's working in the console but not in your source file, you're probably missing a jQuery $.ready() wrapper.

The second debugging tool is fairly new to me. If you click on the 'Scripts' tab and choose one of the JS files included in the page, you're able to add breakpoints to the code by clicking on the line numbers. Once the breakpoints are added, you can step through every breakpoint. It can be tedious tracking down the code you want to put the breakpoint in if you have many files. It's especially painful if you're writing in CoffeeScript as the line numbers you're editing in your .coffee file differ from the compiled line numbers of the same .js files (what you see in CDT).

Last week a colleague of mine, Wilson Collado, told me about the 'debugger;' call. If you put 'debugger;' in your JavaScript, it's the equivalent of putting a breakpoint through the scripts tab. It's a really useful trick that can save you a lot of time. One last thing about the scripts tab - make sure you take a look at the call stack information panel (located on the right side of CDT's scripts tab). In this panel you can also inspect all the local variables at the time in which the breakpoint/debugger was reached. Very helpful when stepping through your JS.

I would be remiss if I discussed JavaScript debugging and didn't mention the 'console.log()' function. In your JS, you can pass any object to console.log and it will print out the object on your console when the code is executed. You can then inspect the object to look at its value/properties. Simple, but very useful. In addition to log(), console also has warn(), info(), error() and assert() functions. Assert is really nice as it will evaluate the contents of whatever is passed into the function as a boolean. Beware that leaving in console calls will break your JavaScript in Internet Explorer, so make sure to take them out before you commit your code.

HTML/DOM Tricks

If you right click on any part of a website in Chrome (with the exception of a website built in Adobe Flash), there's an option called 'Inspect Element'. This is any front-end dev's bread and butter. You can view/edit HTML, traverse the HTML hierarchy, view CSS for any HTML element...the list goes on. For the most part, inspecting the HTML is elementary (yes, pun intended), but did you know that if you select an element, go to the console, and type '$0' - the last element you inspected is now tied to that variable? I didn't. Will User dropped that gem on me last week. The $n commands return previously selected nodes; try typing $1, $2 and so on.

Here's another one that was news to me. There are certain CSS selectors that are used on every custom-styled button (or link): ':hover' and ':active'. If you have a button that is supposed to change its image when you hover over it or press it, you're invoking one of these selectors. They're a pain to style, though, because when you hover over the element the CSS properties show, but you can't edit them without moving your cursor over to the CSS pane (it's hard to explain - just try it and you'll see what I'm talking about). Enter the 'Toggle Element State' feature. In the CDT CSS pane, click the 'Styles' arrow and then click the button with the dotted square and a cursor in the center of it. Now you can toggle these selectors. See below:

Console DOM Tips

Lastly, let's go over some tips that involve both the console and the Document Object Model (DOM). Fire that console back up and press ⌘ + K, which will clear it. Grab an element off the DOM with jQuery (or use $0 if you've recently inspected something). Now call 'dir()' and pass in the element. As you'll notice, 'dir()' will print out every attribute/property that exists for the element. There were definitely some properties I didn't even know existed after using it for the first time.

Let's say you're generating something on the console. Maybe a calculation or you're scraping a page for some text. Did you know that you can pass in JavaScript to the 'copy()' function and it will copy the result of the JavaScript to your clipboard? The potential uses are probably not obvious, but it's another solid tool to keep in that JavaScript debugging toolbox.

If you have some CDT functions/tools/tricks that I didn't list, please share them in the comments.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241620 2012-02-03T21:35:00Z 2017-02-09T22:41:15Z Design Inspiration

by Dan DeMeyere - @dandemeyere

Being inspired is one of the strongest types of motivation for me. When I see something great, I can become inspired to build something of equal caliber or at least put more effort into what I'm currently working on. Sometimes the execution of an idea can be just as inspiring as the idea itself. One great example of this is Path. The more you use their app, the more you can appreciate how much attention to detail was involved in building the app. Every interaction, every pixel, is perfect. It's hard to put into words the experience that is felt when you use something and everything is just 'right'. 

So just in case you need some inspiration, here are some websites and Dribbble shots that I find inspiring.

www.BlameStella.com

Other than giving thredUP a great score, Stella's website has nice, non-obtrusive color tones that emanate an inviting feeling. It's refreshing to see a website that isn't exploding with 20 call-to-action buttons.

www.1minus1.com

I love the graphics on this site. A well crafted back-end can be just as important as a pixel-perfect front-end and the illustration of design and development being poured together into a funnel is a great visualization of this.

Fork CMS

When I'm looking to use a new library, gem, or application, I find that the presentation of the product impacts my willingness to give it a try. Fork CMS is a great example of this. There's no reason for me to use a new CMS, but after visiting their website I wanted to download it and try it out anyway.

Dribbbles

Now it's time for a couple of Dribbble shots. This Twitter share shot is particularly awesome for a subtle reason. Twitter makes it very hard to change the style and design of their Tweet button. You can do a custom-designed button that links to Twitter.com, but the user has to leave your website to make the tweet. To allow the user to tweet without leaving your website, you have to use their pre-built button/widget. This person designed around the button, instead of re-designing the button itself, which keeps the best functionality intact while making the developer's job easier.

No explanation needed for this one, I just think it's awesome and would be a really cool login interaction.

I'm not embarrassed to show the inspiration (or exact copy) of my work portfolio design. I was brainstorming the best way to allow users to interact with my professional experience on my website and a colleague of mine sent over this Dribbble. The UI looked so cool that I wanted to code it up just for the fun of it, but I liked it so much that I ended up keeping it.

A lot of these websites were discovered via line25's Websites of the Week series. It's a great RSS feed to follow if you're a fan of web design. If you have any websites you would like to share, please link to them below in the comments.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241621 2012-01-31T06:38:00Z 2013-10-08T16:13:10Z New Additions for your Rails Toolbox

by Dan DeMeyere - @dandemeyere

One of the biggest advantages of using Ruby on Rails as your web framework is the open source community. They unselfishly release new gems every day. They build the best tutorials for free and they commit to the latest Rails builds on a daily basis. The only variable left in the equation when you're building a Rails app is knowing what tools to use when approaching a project implementation. Having a solid toolbox at your disposal is essential to furthering your professional development as it allows you to take on more challenging projects that are outside your comfort zone.

When I first started developing with Rails, I was blown away by my CTO's ability to nonchalantly take on challenging projects. The project or feature would seem outlandish and at times borderline impossible, but he would say something like "we'll figure something out" and somehow he always did. It seemed like magic. After watching closely for many months, I came to realize he had three things: a positive outlook, a solid toolbox, and a creative imagination. The combination of the three were a potent arsenal for tackling epic projects. He wouldn't think about why it couldn't be done - he would start in the ideal world that everything's possible and begin brainstorming from there.

Having a good outlook and creative imagination is not something you can attain from reading a blog post, but you can always add more tools to that handy toolbox. Here are four new additions to my toolbox; hopefully one of these will help you one day when you come across a new project.

Storing your ActiveRecord Objects in Redis

Link to article →

Speed is important. Rails contains a fair amount of overhead and it's considered one of the harder frameworks to scale. It isn't an issue when your website is small, but as you grow you'll discover some of the earlier code written in your app won't hold up as the user base grows. This results in feature re-writes and becoming more clever with your implementation. One of the first tools to reach for when attempting to speed something up (outside of profiling your DB queries) is caching. We use memcached at thredUP, but the problem with memcache is that the data isn't persistent. If your memcache server goes down, you can lose all the cached data within it. This makes memcache only feasible for information that can afford to be lost. What about data that needs to be written to the DB? You can't reliably do it with memcache, but you can with Redis.

The tool I linked to above is a gem that the fine folks of Gowalla, recently acquired by Facebook, wrote to accommodate quick writes and reads of an ActiveRecord object. Their example is 'liking' something. You have to be able to like something quickly and pull back information on that like such as how many others have liked it and who they are. Personally I've been wanting to re-write thredUP's in-house notification system for a while as it hits the database on every logged-in page load and this is going to be the first tool I reach for when that time comes.

Converting Dates Between Ruby and JavaScript

Link to article →

This one is very straightforward. In multiple parts of our app, specifically areas where we're doing background AJAX polling, our JavaScript scrapes time data off the DOM (like timestamps of objects stored on an attribute of a div). The problem is that JavaScript's time (local) and our server's time (GMT) are in different time zones. The link above gives you a quick and easy way to convert back and forth between Ruby's Time class and JavaScript's Date class.

Lazy Evaluation and Method Chaining

Link to article →

Most people know about method chaining in Ruby, especially within the context of ActiveRecord methods. Most people also know that an ActiveRecord call does not hit the database while it's still an ActiveRecord::Relation object, which is an example of lazy evaluation. The real tool comes from using both of these together.

The best example I can give is compound filtering. If you go to an e-commerce website and select 'Sort By: Lowest Price' and 'Gender -> Men', those are options that compound to filter your results. One way of implementing this is having a nasty 'if params[:sort] == "Lowest Price" && params[:gender] == "Men"' block for every possible filtering scenario. That is obviously not ideal. But if you add named_scopes that build conditions for each individual filter and method-chain all the compounding filters onto an ActiveRecord::Relation object that executes its call to the DB at the end, that's a clean way of doing it.
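The scope snippets themselves didn't make it into the archive, but the pattern can be sketched in plain Ruby with no Rails required; ProductRelation and its filter methods are hypothetical stand-ins for named scopes on an ActiveRecord::Relation:

```ruby
# Each filter method only records a condition and returns self, so calls
# chain; nothing is evaluated until #to_a - analogous to chaining scopes
# onto a relation that hits the DB once at the end.
class ProductRelation
  def initialize(products)
    @products = products
    @filters  = []
  end

  def for_gender(gender)
    @filters << ->(p) { p[:gender] == gender }
    self
  end

  def sort_by_lowest_price
    @sort = ->(p) { p[:price] }
    self
  end

  def to_a
    result = @products.select { |p| @filters.all? { |f| f.call(p) } }
    @sort ? result.sort_by(&@sort) : result
  end
end
```

Chaining then reads like ActiveRecord: ProductRelation.new(products).for_gender('Men').sort_by_lowest_price.to_a - the work happens once, at the final .to_a.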

ActiveRecord Key Value Stores

Link to article →

Key value stores are awesome. They're super simple and super fast. To get geeky for a second, a key value store look-up runs in O(1) time in big O notation (that's the fastest). I like to hack together Hash look-up objects in Ruby all the time to speed up checking whether an ActiveRecord ID is in a collection. The alternative is using an array (assuming it can't be looked up in the DB), and even if the array is sorted and the look-up uses binary search, it's still O(log n). You just can't beat key value stores for look-ups.
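A minimal sketch of that Hash look-up hack (the IDs are hypothetical):

```ruby
# Build the Hash once, then every membership check is O(1);
# the Array alternative scans (or at best binary-searches) on every check.
ids       = [3, 7, 42, 1001]
id_lookup = ids.each_with_object({}) { |id, h| h[id] = true }

id_lookup.key?(42)   # O(1)
ids.include?(42)     # O(n)
```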

The link I referenced leverages a class's method_missing as a way to arbitrarily set key values. I would never use something like this on a heavily used model such as User, as using method_missing like that is dangerous, but it does make sense in the context of ApplicationSettings. Either way, it's a clever implementation for dynamically setting class attributes and it opens up your imagination for ways in which it can be used.

Have a tool you would like to share? I'd love to hear about it below in the comments.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241623 2012-01-06T22:38:00Z 2013-10-08T16:13:10Z Facebook FB.ui() apprequests IE Bug

by Dan DeMeyere - @dandemeyere

That title was a mouthful! This post is a simple bug fix for an IE 7/8 issue that I couldn't find any solution for on the interwebs. The bug is as follows: if you use the Facebook JavaScript SDK to send apprequests (the invites that show up in the Facebook notifications jewel) and you're doing so through the FB.ui() method, the default settings will not work in Internet Explorer 7 and 8.

FB.ui() requests use Facebook Dialogs to render the modal. In your FB.ui() call you can specify the way in which you want Facebook Dialogs to render the modal. The options are 'page', 'popup', and 'iframe'. The Facebook documentation states that 'page' is the default, but for FB.ui({method: 'apprequests'}) calls, 'popup' is the default - which looks great in Chrome/Firefox/Safari; however, in IE, the modal never appears. So for IE users you need to set the display method to 'iframe' and it will work fine.

All you have to do is write some simple JavaScript browser detection code to toggle the display method. Here is my CoffeeScript to do this:

In case you're not a CoffeeScript fan, here is the equivalent to the above in JavaScript.
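Neither the CoffeeScript nor the JavaScript snippet survived the archive. The detection itself can be sketched as a small helper (the function name is hypothetical); IE 7 and 8 identify themselves as 'MSIE 7.0' / 'MSIE 8.0' in the user-agent string:

```javascript
// Pick the Facebook Dialog display method based on the browser:
// 'iframe' for IE 7/8 (where 'popup' never appears), 'popup' otherwise.
function fbDisplayMode(userAgent) {
  return /MSIE [78]\./.test(userAgent) ? 'iframe' : 'popup';
}

// Usage with the Facebook JS SDK (FB is loaded separately):
// FB.ui({ method: 'apprequests', display: fbDisplayMode(navigator.userAgent) });
```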

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241625 2011-12-23T16:34:37Z 2013-10-08T16:13:10Z Don't get stuck in development mud

Anyone that has ever done an online startup knows that one of the hardest decisions you continually have to make is where to spend your developer time. The game is a bit of a paradox too, because the more you do, the more you have to maintain. This maintenance is "developer mud", and it includes things like coding new emails, updating old ones, testing alternative views and user flows, and improving metrics visibility - the list goes on and on. All these activities are important, but the big challenge is deciding when they should take priority over new features.

It's true, a new feature is unlikely to be the game changer it appeared to be when it was conceived, but the same can be said for the maintenance. The big difference is how much technical debt you build up by doing an activity. For example, building a new section of a site for users to communicate on a simple Facebook-like wall might be a great addition for users, but along with it comes the complexity of more schema, more tests, user generated content (and possibly moderation) and other ancillary needs.

One of the places we find ourselves a bit stuck in the mud at thredUP right now is email. We send a lot of system email and a lot of marketing email. As easy as Rails makes email, it is still far too painful. While solving this is not going to reduce the technical debt load, it could take the maintenance side of emails out of developer hands and give our marketers the ability to experiment and test emails freely. My next post will explore a few options out there for us with respect to transactional email.]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241628 2011-12-15T03:14:00Z 2013-10-08T16:13:10Z Rails Association Tips

by Dan DeMeyere - @dandemeyere

Rails is one of those frameworks where you can infer how it will interpret code. A simpler way to put it: sometimes you'll be coding something for the hundredth time and all of a sudden you'll think to yourself, 'I wonder if this will work', and you'll try to change a convention based on your previous experiences with the language. This recently happened to me and I wanted to share what I discovered.

We have a lot of named scopes in our app. Typically our named scopes look something like this:

In that example, our User model has_one :concierge_information (btw, definitely check out our new thredUP Concierge app). While I was creating this scope, I wondered if I could infer which 'joined_via' column I was referencing since I was querying over two tables. So I tried this:

And it worked! Much prettier, right? That example doesn't save a lot of keystrokes, but it should help keep things organized if you're joining across multiple tables and/or querying a lot of ambiguous columns (when your query references a column name that exists on multiple models you're querying across). While I was writing this, I thought it might be nice to share some other association-related tips I've learned.

ActiveRecord Association Tips

Rails associations are awesome and they come with a lot of built-in class methods/helpers. The basic rule of thumb that I use is that if you're accessing an object's association, you probably have access to the ActiveRecord methods the association's model has. For example:

In that example, you can do any ActiveRecord query on that user's orders as if you were querying a subset of the Order table itself. This can be very handy if you have nested associations.

Object Association Tips

One that I recently came across was the built-in method for creating an associated record that is a has_one association. It ended up just being 'create_#{association name}'. It's not life changing, but it's nice to know. Here it is in action:

In case you're wondering what the has_many version looks like:

This brings up an important note about something I was fuzzy about until recently: when are these objects saved? Obviously a .create() call will write to the DB, but what about when you create/build the object outside of the association and then assign it afterwards? Here are some examples (these all assume that @user is already stored in the database):

That was helpful for me to learn as unnecessary .save calls result in additional calls to the database.

Lastly, I came across an edge case scenario that I was curious about. If you're using the '<<' operator to associate newly created objects, how do you know if the object you're creating failed? The answer is you don't. However, you can use the .push() method and it will do the same thing as the '<<' operator except that it will return true or false depending on whether the creation was successful or not.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241630 2011-11-11T04:18:11Z 2013-10-08T16:13:10Z Setting Up Sinatra with MySQL and ActiveRecord

by Dan DeMeyere - @dandemeyere

Sinatra can be used to do a number of things. Here at thredUP, we're using Sinatra to turn some of our most used models into services accessible through an internal API. The Sinatra apps that we're building will still connect to our main Rails 3 app's MySQL database, but we'll be able to spin up multiple instances of our most used services which will help us scale as we continue to grow.

We use ActiveRecord for the models we're turning into services, so when I was setting up the Sinatra app, I figured a simple Google or StackOverflow search would yield the information I needed to connect the new Sinatra app to the database through ActiveRecord. Nope.

If you're looking to use sqlite3 with ActiveRecord or DataMapper, then this StackOverflow post will be your guide. For those with ActiveRecord and MySQL, here's what you have to do:

Assuming you have the 'active_record' and 'mysql2' gems installed already, all you need to do is create a config file that you require in your main Sinatra .rb file. The contents of the config file are relatively straightforward:
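The config file itself is missing from the archive; a sketch of what it might contain (the file name and paths are hypothetical, assuming the 'activerecord' and 'mysql2' gems):

```ruby
# config/environment.rb - require this from the main Sinatra .rb file
require 'active_record'
require 'yaml'

env = ENV['RACK_ENV'] || 'development'
db_config = YAML.load_file(File.expand_path('../database.yml', __FILE__))
ActiveRecord::Base.establish_connection(db_config[env])
```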

The env variable is important if you have different database settings for production, development, and any other environments. Here is an example of what database.yml might look like:
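The example database.yml is also missing; hypothetically, it might look like this (hosts, names and passwords are placeholders):

```yaml
development:
  adapter: mysql2
  host: localhost
  database: myapp_development
  username: root
  password: ""

production:
  adapter: mysql2
  host: db.internal.example.com
  database: myapp_production
  username: deploy
  password: secret
```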

I know this is simple and easy, but I figure I'm not the first person to come across this.

Two other useful Sinatra resources:

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241632 2011-10-17T18:08:54Z 2013-10-08T16:13:10Z Resizing the Root Disk on a Running EBS Boot EC2 Instance
I routinely go back to this post - Resizing the Root Disk on a Running EBS Boot EC2 Instance by Eric Hammond - when I am working on a new AMI and want the EBS volume to be bigger. I have created a shell script of his commands that I will use from now on and thought I'd share it. Enjoy.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241634 2011-10-12T18:52:32Z 2013-10-08T16:13:10Z Free up "inactive" memory on OS X

Have you ever witnessed this?

Why on earth would OS X keep 3.97 GB of inactive memory rather than releasing it?  I think it's to let closed programs launch faster but frankly, I closed the program on purpose so I wish I could turn this off.  While I haven't found a way to turn off this feature, there is a way to circumvent it - pronounced cirsumvent by anyone in the know - and get your RAM back.

Enter "purge".

If you have Xcode installed, you already have the command-line tool you need. Just type "purge" in the command line and voila:

If anyone knows of any problems with this, please reply in comments and let us all know!
]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241636 2011-10-12T05:20:46Z 2013-10-08T16:13:10Z CoffeeScript for Beginners (Part 2 of 2)

by Dan DeMeyere - @dandemeyere

My first post on CoffeeScript was primarily focused on getting CoffeeScript installed and having Node.js continuously compile your .coffee files. I touched on creating simple objects and iterating in a for loop, but this post will be all about the standard .click() handler and AJAX function (as well as some nice tricks).

Click Handlers

The .click() handler is probably the most used jQuery method and it happens to be an easy implementation in CoffeeScript. For the rest of this post, let's assume you have a CoffeeScript file (app.coffee) with a .setup() method that is called on page load. So somewhere in your view (after you have included the compiled JavaScript file), you have something that looks like this:

That code will ensure your app.setup() method is called after the DOM has loaded and your JavaScript is aware of every element. So what does your .click() method look like? Like this:
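The snippets didn't survive the archive; here is a sketch of both pieces, assuming jQuery and an app object (the selector and handler body are hypothetical):

```coffeescript
# In the view, after the compiled JavaScript is included:
$ ->
  app.setup()

# app.coffee
app =
  setup: ->
    $('#my-button').click (event) ->
      alert 'clicked!'
```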

AJAX

AJAX, which stands for asynchronous JavaScript and XML, has become a staple for front-end engineers. Anytime you want to talk to the server or load some foreign information without refreshing the page, you'll turn to AJAX. So for the sake of our example, let's say you want to make an AJAX call when someone clicks on a button. I'll assume you know what the JavaScript equivalent is and I'll only show the CoffeeScript:

Going the Extra Mile

There are certain best practices that can help deliver the best and most responsive experience to your users. There are two things in particular I like to do when I have a button that triggers an AJAX call.

The first thing is adding a disabled state when someone clicks the button. The user may not notice the activity spinner in the browser's tab, so a different button design that indicates a change will let the user know the website is responding to the click. It also gives us a flag to determine whether the button has been clicked, so we don't make more than one AJAX request if they double click it. Keeping all this in mind, here is what the click handler looks like now:

This does two things. The preventDefault() call is there in case you have a valid href attribute on the anchor tag; it prevents the link from navigating away. The second thing is adding the class 'selected' to the button and then wrapping the AJAX method call in a .hasClass check to make sure the 'selected' class hasn't been added already.

The other thing I like to do is handle the data response from the AJAX call. If something goes wrong in your controller (I'm assuming you're using an MVC framework), but all of the code executes anyway, your JavaScript is unaware that something went wrong and you'll never enter the 'error' block of your AJAX call. This is why I always set an error flag in my controller that I pass back. Here is the block of code you'll need in your controller to respond to an AJAX call.
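A self-contained sketch of the idea (the names here are hypothetical). In a Rails controller this would live inside the action, e.g. `format.json { render :json => ajax_response(item.save) }`; it's written as a plain method here so the JSON shape is easy to see:

```ruby
require 'json'

# The :error flag tells the JavaScript whether the server-side work
# actually succeeded, even though the HTTP request itself returned 200.
def ajax_response(saved)
  { :error => !saved }.to_json
end

ajax_response(true)   # => '{"error":false}'
ajax_response(false)  # => '{"error":true}'
```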

Now that our controller is passing back whether an error occurred or not, our JavaScript can respond accordingly. Here is everything coming together:
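Putting the pieces together, a full sketch (selector, URL, and messages are hypothetical) might look like:

```coffeescript
app =
  setup: ->
    $('#buy-button').click (event) ->
      event.preventDefault()
      button = $(event.currentTarget)
      unless button.hasClass('selected')
        button.addClass('selected')
        $.ajax
          url: '/items/save'
          type: 'POST'
          dataType: 'json'
          success: (data) ->
            if data.error
              # the controller ran but flagged a problem - re-enable the button
              button.removeClass('selected')
              alert 'Something went wrong - please try again'
            else
              console.log 'saved!'
          error: ->
            # the request itself failed
            button.removeClass('selected')
```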

Useful Resources

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241637 2011-08-30T23:53:00Z 2013-10-08T16:13:10Z Parallel tests with the thredUP app

The parallel_tests gem is a relatively simple gem to install. That said, it took me a while to get it to work properly, so here's how you do it in 6 easy steps.

1. Add the gem to the test section of the gemfile
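Something like this in your Gemfile (assuming the grosser/parallel_tests gem):

```ruby
group :test do
  gem 'parallel_tests'
end
```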

2. Now we need to change the database.yml file in order for the gem to create the databases you need (the gem will create as many databases as the number of cores on your processor). Edit the test section of your database.yml file to look like this:
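The key is suffixing the database name with TEST_ENV_NUMBER, which the gem sets per process (blank for the first process, "2", "3", ... for the rest). A sketch, assuming MySQL and hypothetical names:

```yaml
test:
  adapter: mysql
  username: root
  database: thredup_test<%= ENV['TEST_ENV_NUMBER'] %>
```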

3. Next we need to set up the gem's rake tasks. To do this, add the following line to the Rakefile:
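The require path below is what the parallel_tests README of that era suggests; the rescue keeps the Rakefile loadable in environments where the gem isn't installed:

```ruby
# Rakefile
begin
  require 'parallel_tests/tasks'
rescue LoadError
  # gem only present in the test group
end
```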

That’s it! We are done with the parallel_tests gem setup. In order to run parallel tests, we first need to prepare the databases.

4. To do that we run the following command:

rake parallel:create

5. Next we need to prepare the test databases with the database schema. Before we do that, we need to run any migrations that have not yet been applied to the test database:

rake RAILS_ENV=test db:migrate

rake parallel:prepare

6. Last, we need to configure Sphinx! Paste the following code into the test section of your sphinx .yml file:
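The gist is to give each parallel test database its own searchd port (and index paths), keyed off TEST_ENV_NUMBER. One plausible shape (exact keys depend on your Thinking Sphinx version):

```yaml
test:
  port: <%= 9312 + ENV['TEST_ENV_NUMBER'].to_i %>
```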

That's it!! To run the tests, use the following commands:

rake parallel:features      # Cucumber

These commands do not work as one would expect though, so I added a rake task to make things a little bit easier. If you want to use the parallel test gem just use:

rake thredup:faster_cucumber
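The task itself might look something like this (a hypothetical reconstruction that simply chains the commands above):

```ruby
# lib/tasks/thredup.rake
namespace :thredup do
  desc 'Migrate, prepare the parallel test databases, then run cucumber in parallel'
  task :faster_cucumber do
    system('rake RAILS_ENV=test db:migrate') &&
      system('rake parallel:prepare') &&
      system('rake parallel:features')
  end
end
```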

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241639 2011-08-25T04:24:00Z 2017-02-09T22:41:00Z From the Dev Couch: Redwood, Backbone, and Capfire
by Dan DeMeyere - @dandemeyere

Apps to Improve your Workflow

App: Redwood
Link: redwoodapp.com

I'm about to gush: <earmuffs> this app is fucking awesome </earmuffs>. I hate searching for things - it's a complete waste of time. A while back I installed Alfred App to allow me to 'jump' straight to launching an application after hitting a hot key and it's a huge time saver (not to mention I never have to organize my application folder again). So Alfred is for your apps, but what about for all your files? Enter Redwood.

If you're like me, you have emails, messages, and attachments scattered throughout Gmail, Basecamp, and Dropbox. Only Dropbox is mounted locally, and I use Thunderbird for my mail needs, so I have to log into Gmail to search and just pray that I'm searching the right Gmail account. Redwood is a local app that lets you add Gmail, Basecamp, PivotalTracker, and Google Docs accounts and gives you the ability to search across all of them... amazing. So I have Alfred mapped to Ctrl + Spacebar, Redwood to Opt + Spacebar, and Spotlight to Cmd + Spacebar. Hello efficiency world.

Learning About New Technology

Technology: Backbone.js
Example: todos.js
Homepage: documentcloud.github.com/backbone/
GitHub: github.com/documentcloud/backbone/

If there was a ranking for the adoption rates of new (so excluding jQuery/Prototype) JavaScript frameworks/libraries, I would bet Backbone.js is second only to Node.js.

Backbone.js allows you to create an MVC of sorts inside your JavaScript. If you're anything like us, a lot of the functionality on your website is handled with JavaScript. Whether it's AJAX posting, error checking, or complicated modals, odds are you've dealt with JavaScript spaghetti. So similar to Rails, or any other good MVC, Backbone provides you with a framework for how to organize, write, and structure your code. It's sort of hard to explain, so here's the official summary of Backbone.js:

With Backbone, you represent your data as Models, which can be created, validated, destroyed, and saved to the server. Whenever a UI action causes an attribute of a model to change, the model triggers a "change" event; all the Views that display the model's data are notified of the event, causing them to re-render. You don't have to write the glue code that looks into the DOM to find an element with a specific id, and update the HTML manually — when the model changes, the views simply update themselves.
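A minimal sketch of that model/view wiring, using the Backbone 0.5-era API (the model and view names are hypothetical):

```javascript
// A model holding some data, and a view that re-renders on change
var item = new Backbone.Model({ name: 'Blue Dress' });

var ItemView = Backbone.View.extend({
  initialize: function() {
    // re-render whenever any attribute on the model changes
    this.model.bind('change', _.bind(this.render, this));
  },
  render: function() {
    $(this.el).text(this.model.get('name'));
    return this;
  }
});

var view = new ItemView({ model: item });
item.set({ name: 'Red Dress' });  // fires 'change'; the view updates itself
```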

I haven't created anything with it yet (my learning plate runneth over), but give it a try and let me know what you think.

Rails Gems Worth Checking Out

Gem: Capfire
Link: rubygems.org/gems/capfire

The thredUP Dev Team has started to use Campfire as our chat collaboration tool of choice. Questions, GitHub commits, and countless emoticons dominate the chat room. As a development team of 10, we come across plenty of questions about different parts of our app or language-specific programming issues, and it doesn't scale to ask one person every question - so we post the question to the Campfire chatroom and let the first available person answer it. So far it's working out really well. So where does Capfire come into play?

Capfire is a gem that automatically posts a message to your Campfire chatroom after someone (namely someone who has Capfire installed) deploys your app. We already have Github hooks in place for pushes to remote repositories, but I think it's valuable to know when new code is being deployed to an app - especially if you have code in that deploy. To install Capfire, check out these install instructions as they're the best I came across.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241640 2011-08-19T03:46:00Z 2013-10-08T16:13:10Z The Power of Lambda, Bindings and Blocks

by Dan DeMeyere - @dandemeyere

I've been slowly making my way through the Dave Thomas series of Advanced Ruby vodcasts. The series has been very enlightening as I'm learning a lot of powerful tools that Ruby offers as a programming language. A particular topic he covered that sort of blew my mind was Ruby bindings and lambdas.

Lambda

Lambda is something I've been working with for a while. The two main ways we utilize lambdas in the thredUP app are with named scopes and dynamic hash values. Examples:
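Sketches of both patterns (the Rails 2-era named_scope sits in a comment since it needs ActiveRecord; the hash example is plain Ruby):

```ruby
# Named scope whose conditions are evaluated at query time (Rails 2 syntax):
#   named_scope :recent, lambda { |days|
#     { :conditions => ['created_at > ?', days.days.ago] }
#   }

# Dynamic hash value: the lambda runs when called, not when the hash is defined,
# so each .call gets a fresh value.
DEFAULTS = { :generated_at => lambda { Time.now } }

DEFAULTS[:generated_at].call  # => the current time, recomputed on every call
```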

The examples above are pretty standard usage of lambda. What I didn't know is that you could do something like this:
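A sketch of the splat trick: the asterisk on the last parameter soaks up any extra arguments into an array, so callers can skip building an options Hash.

```ruby
# *options collects any trailing arguments into an Array
labeler = lambda do |name, *options|
  options.empty? ? name : "#{name} (#{options.join(', ')})"
end

labeler.call('thredUP')                  # => "thredUP"
labeler.call('thredUP', 'used', 'kids')  # => "thredUP (used, kids)"
```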

I think this is particularly useful in the event your lambda block takes in options. Typically I see people construct an options Hash object and then pass it in. Putting an asterisk (*) before the last parameter allows you to just pass all your options within your method call. If you have a lot of options, the call will look unwieldy and the Hash object creation is probably best.

Bindings

Yesterday was the first day I had ever heard the term bindings within the context of Ruby. Although I'm still trying to think of useful implementations of bindings inside of our thredUP app, the knowledge of how to use them should become a handy tool when brainstorming solutions in the future. My understanding of a binding is that it allows you to hook into a block's execution context and retain the ability to enter that context through the hook at will. Ruby-doc says that bindings 'encapsulate the execution context and retain this context for future use'. I like my definition better.

So once you create a binding, you're able to invoke said binding using the Kernel#eval method. Here is a simple example:
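A hypothetical reconstruction of the idea: expose a binding from inside a class, then eval against it from outside.

```ruby
class Thredup
  def initialize
    @founded = 2009
  end

  def context
    binding   # captures self, locals, and instance variables at this point
  end
end

b = Thredup.new.context
eval('@founded', b)  # => 2009, with no attr_reader needed
```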

One could easily argue against the practicality of this example, but it was the best I could come up with in a couple of minutes. This particular binding gives you access to an instance variable of a class without setting up an attribute accessor. Also, since eval() is used, dynamic construction of objects opens up a little world of possibilities. If the EMPLOYEES constant contained classes instead of strings, eval() could dynamically create new objects for those classes (or access methods of those classes) within another class (such as Thredup) with the use of a binding.

It's a bit mind-numbing to think about good use cases for bindings, but it's a nice addition to a developer's toolbox. You never know when you might have to throw every tool in your toolbox at a problem to arrive at a solution - so every tool counts!

Blocks

I came across something that I thought was really interesting. A method can accept a block if you denote the parameter name with the '&' symbol. The block will come in as a Proc object (almost identical to a lambda) and will therefore require the .call() method to invoke it. For once, I have a practical example of how valuable this is:
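An in-memory sketch of the cache_me idea (the real version presumably wrapped Rails.cache or memcached):

```ruby
CACHE = {}

# &block captures the caller's block as a Proc; block.call only runs
# on a cache miss.
def cache_me(key, expiry, &block)
  entry = CACHE[key]
  return entry[:value] if entry && entry[:expires_at] > Time.now

  value = block.call
  CACHE[key] = { :value => value, :expires_at => Time.now + expiry }
  value
end

cache_me('metric', 60) { 21 * 2 }          # => 42 (computed and stored)
cache_me('metric', 60) { raise 'skipped' } # => 42 (served from cache)
```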

This cache_me method could be used as a tool to cache or retrieve (fetch) a metric based on a cache key and an expiration. The key and expiry could also be combined into one dynamic key if you want to add some syntactic sugar to your metric pie.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241602 2011-08-02T20:34:59Z 2013-10-08T16:13:10Z Tracing Delayed Job Errors in the Test Environment

We use delayed_job a lot. Typically thredUP.com has 8 workers chewing through background tasks. One of the annoying things we have run into is that in our test suite we have had to write custom cucumber steps and drop in debug lines when a delayed job fails so we can peer into the queue easily. This was a pain in the ass because it usually meant re-running the tests that were failing.

Recently we added an initializer to push delayed job errors to Hoptoad (aka Airbrake):
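A sketch of such an initializer; the exact hook point depends on your delayed_job version, so treat the method name below as an assumption:

```ruby
# config/initializers/delayed_job_hoptoad.rb
class Delayed::Worker
  alias_method :handle_failed_job_without_hoptoad, :handle_failed_job

  def handle_failed_job(job, error)
    HoptoadNotifier.notify(error)   # report to Hoptoad/Airbrake
    handle_failed_job_without_hoptoad(job, error)
  end
end
```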

This works great for running down bugs in production but doesn't solve the test environment problem - we aren't about to start pushing test errors to Hoptoad. After using this for the last few weeks, a simple thought occurred to me that made me feel like a huge idiot: just drop in some awesome_print output right there.
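Inside the same failure hook, that might look like (a sketch; `job.last_error` is delayed_job's error column):

```ruby
if Rails.env.test?
  require 'awesome_print'
  ap job.last_error   # pretty-print the failure to the console
else
  HoptoadNotifier.notify(error)
end
```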

Well, now we have nice and pretty delayed_job output for the test environment, and it's a little scary: there are failures we didn't even know about!
]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241603 2011-07-30T05:39:00Z 2013-10-08T16:13:10Z Improving Your TextMate Proficiency

TextMate is great, but on more than one occasion I've come very close to going 'Office Space' on my computer when TextMate thought it would be fun to beach ball and stop responding. Over time, thanks in large part to my CTO, I've found some ways to make my TextMate experience faster.

PeepOpen

Pressing (Command + T) in TextMate brings up the 'Go To File' window. The suckiness of this feature is twofold. First, if you have a large project (thredUP has over 2,300 files in our project), the 'Go To File' window will hang frozen for about a year. Second, if you have multiple similarly named files, then it's practically useless. Enter PeepOpen ($12).

PeepOpen is awesome for a number of reasons. For one, PeepOpen keeps a persistent index of your project files so lookup is incredibly fast. Also, through some sort of magic, you are no longer limited to just typing file names to find a file. If I were looking to load the './app/controllers/admin/users_controller.rb' file, I could type 'admin users' and the results would be dramatically narrowed. You can type any part of the path and it will match against the index.

Also try: (Command + Shift + T) brings up a window to find a method within a file. Especially handy in large models.

Project Searching

If you're searching within a file, TextMate's built-in (Command + F) is fine. If you want to search multiple files such as a folder or the entire project, don't bother pressing (Shift + Command + F). It will take forever to find anything. Luckily there are multiple free and easy to install alternatives.

The first alternative is AckMate. I think it's the most widely used search plug-in. I didn't like it. First off, it doesn't support .haml files out of the gate. So I use a much lesser-known plug-in called NiceFind. It's simple to install (double-click the NiceFind.tmplugin file after download) and it's very fast. It also allows you to select a directory in TextMate and press (Command + Shift + T) to recursively search only within that directory.

If I'm doing a costly search such as searching through some logs, I always default to the good 'ole grep command. However, sometimes you're on the command line and outside of TextMate and you want to search for files regardless of where they are stored in your app. If that's the case, then project_search is going to be your buddy. For example, I could do 'script/find helper "def emit_"' and it knows to only search helper files for that method.

Snippets

My favorite part of TextMate. If you aren't aware of TextMate Snippets, open TextMate and go to Bundles -> Bundle Editor -> Show Bundle Editor and explore all your different options. They are broken up by file detection, which I believe is done by your bundles. So your models might be under 'Ruby' or 'Ruby on Rails' and your views might be under 'Ruby Haml' if you have a separate bundle for HAML. I could list some good ones, but this cheat sheet is all you really need.

You can also go one step further and create your own snippets. While you're in the Bundle editor, select the bundle you want to add a snippet for and then click the '+' button in the bottom left. One of the ones I made is for Haml. Very commonly there is a JavaScript file I want to include for a page that also requires a jQuery DOM Ready setup call. So I created this snippet:
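The snippet body would be something along these lines (a sketch; TextMate's $1 is a tab-stop placeholder, and repeating it mirrors the text into both spots):

```
= javascript_include_tag "$1"
:javascript
  $(document).ready(function() {
    $1.setup();
  });
```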

For the Activation field, I selected 'Key Equivalent' and put in the three letters 'cfj'. What this means is if I'm in a Haml file and I type cfj and press the tab key (which is usually how most snippets are activated), the following code is output for me:
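With a hypothetical 'checkout' typed at the placeholder, the expanded Haml would look like:

```
= javascript_include_tag "checkout"
:javascript
  $(document).ready(function() {
    checkout.setup();
  });
```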

The $1 variable set in the snippet is a way to autocomplete portions of the text. So after you press tab, the cursor will be waiting in the open quotes on the JavaScript include tag. Whatever I type there will also go before the .setup(); function call.

ReMate

Lastly, if you have a really large project, ReMate (free) is a must. Have you ever selected TextMate and it just hangs there? It's probably because TextMate is refreshing the entire project tree. If you install ReMate, you can disable this feature. After you install ReMate, go to Window -> Disable Refresh on Regaining Focus. That simple. You can toggle this option if you want this on or not and if you want to manually refresh your project tree, right click in the folder pane and select 'Refresh All Projects'.

If you know of good ways to speed up (maybe speedUP?) your TextMate usage, I'd love to hear about them in the comments below.

]]>
thredUP Engineering
tag:labs.thredup.com,2013:Post/241604 2011-07-27T01:47:00Z 2013-10-08T16:13:10Z A less permanent rails console

In a few of the previous places I've worked, it was common practice to use transactions whenever we needed to run queries on production environments.  What this does is give you the chance to verify that what you did was "correct".  In MySQL, entering transaction mode is pretty straightforward:
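A sketch of the workflow (table and values are hypothetical):

```sql
START TRANSACTION;

UPDATE users SET state = 'inactive' WHERE id = 42;
SELECT id, state FROM users WHERE id = 42;  -- eyeball the result

ROLLBACK;  -- or COMMIT; once you're sure it's correct
```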

Even when working in a developer environment, it's sometimes useful to have the ability to undo your work, so that you don't taint your data (which often takes a long time to reload if it's derived from a subset of production data).  

Here at thredUP, I tend to use rails console for prototyping a lot of the code I write (I know, I know, that's what tests are for) to validate that it's working as expected.  Recently, I ran into a situation where I needed to perform a destructive operation over a subset of data.  It becomes difficult to iteratively tweak or improve such code when the subset you're operating on is no longer there!  So I discovered a sweet flag when starting rails console that allows you to essentially make your entire session a transaction.
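The flag in question is --sandbox:

```
# Rails 3
rails console --sandbox

# Rails 2.x
script/console --sandbox
```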

So now you can do anything to your data and undo it by simply exiting rails console.  It appears this is done behind the scenes (watch the logs to see how) by enclosing any queries called inside a begin/rollback block in MySQL.  Interestingly, when you perform a save operation, Rails saves the state prior to the write operation and then reverts back to it after the write.

For example, the following commands run on the console (in sandbox mode):
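For instance, a hypothetical session like:

```
>> user = User.first
>> user.update_attribute(:state, 'inactive')
>> exit
```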

Produces the following log entries on the development log:
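Reconstructed from the surrounding description and typical ActiveRecord logging of that era (your exact output will differ), the entries would look roughly like:

```
BEGIN
SAVEPOINT active_record_1
UPDATE `users` SET `state` = 'inactive' WHERE `id` = 1
RELEASE SAVEPOINT active_record_1
```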

Notice the 2nd and 4th lines in the log, where it's saving its spot before the db write and restoring it after, respectively.  Finally, when you exit out of rails console, Rails issues a ROLLBACK on the entire session.  You're back to where you started, which can sometimes be a godsend (especially in production :)).

]]>
thredUP Engineering