Foursquare users can publish a tip about any venue in the Foursquare database using either a mobile device that supports the Foursquare app or a computer. Tips may be submitted before, during, or well after a check-in, so users can leave tips for venues they have never actually visited or checked in to. This, of course, opens the door to irrelevant or malicious user feedback and, in some cases, outright spam. For this reason, every published tip presents users with a one-click option to Report mischief in any form; users are likewise able to Save and Like tips if they wish.
The app encourages users to contribute tips to the Foursquare community in several ways. After a check-in, the app will sometimes present the user with a popular tip (see screenshot above; “1st time here! Here’s a popular tip”); presumably, and based on my own experience, it will promote a tip from the user’s friend network if one exists. Soon after a check-in, the app asks the user whether they would like to leave a tip for that venue, and users exploring venues via the app or the website are likewise prompted to leave tips. Aside from making user lists, publishing tips is the primary way to contribute to the growing body of user-generated knowledge about venues. Tips are limited to 200 characters but, as on Twitter, can include URLs that direct others to additional, web-hosted information (Chris Thompson on May 24th, 2010, What is a tip?). Because tips are fundamentally public, they lend themselves to data collection, especially by computer.
Even with a shortlist of standout coffeehouses from the Foursquare best-of-Phoenix page, I still had a collection of nearly 500 tips. Preliminary statistics on the top-ten list revealed a clear division between the “top” five and the “bottom” five. The top five all scored 9.0 or above on Foursquare’s 10-point rating system, each operated a single location (i.e., had a distinct personality), and each appeared in more user-generated lists and featured more user-uploaded photographs. By contrast, three of the bottom five had more than one location, and the bottom five also had fewer total check-ins and a lower ratio of users to user check-ins than the top five.
Among the top five coffeehouses, there were a total of 346 user-generated tips. All tips from these coffeehouses, ranging from summer 2009 through April 20, 2013, were included for analysis in this study. They were collected using a desktop computer to browse individual venue pages so that each page of tips could be selected, copied, and pasted. Data were first pasted into Microsoft OneNote to strip formatting and then copied over to Microsoft Excel. Initial collection preserved the date each tip was submitted, the username of the submitter, the number of likes awarded to each tip, and the full text of the tip itself. This base set (minus usernames) is embedded in the spreadsheet below; it was used to derive second-level data, such as the word count of each tip, the average word count for tips at the same coffeehouse, and, through discourse analysis, a breakdown of the broad topics associated with each tip.
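The second-level derivation described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual workflow (which was done in Excel); the field names and sample tips are hypothetical.

```python
# Hypothetical sketch: given collected tips (venue + text), compute a
# per-tip word count and the average word count per coffeehouse.
# The "venue" and "text" keys and the sample tips are illustrative only.
from collections import defaultdict

tips = [
    {"venue": "Lola Coffee", "text": "Try the cortado. Best espresso in town."},
    {"venue": "Lola Coffee", "text": "Great wifi and friendly baristas."},
    {"venue": "Lux Central", "text": "Open late; the iced toddy is excellent."},
]

# Per-tip word count (simple whitespace split)
for tip in tips:
    tip["word_count"] = len(tip["text"].split())

# Average word count per coffeehouse
totals = defaultdict(lambda: [0, 0])  # venue -> [total words, tip count]
for tip in tips:
    totals[tip["venue"]][0] += tip["word_count"]
    totals[tip["venue"]][1] += 1

averages = {venue: words / count for venue, (words, count) in totals.items()}
```

The same logic extends directly to the other second-level measures (e.g., likes per tip) once the base columns are in place.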
Getting a Flavor for the Feedback
I developed broad categories after an informal review of all tips and created columns in Excel to code them. With 346 tips and five categories, there were potentially 1,730 “tip elements” to consider. In practice, however, each tip accounted for an average of 1.4 tip elements, which brought the overall total to just 488.
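The coding scheme can be illustrated as follows. Each tip receives a 0/1 flag per broad category, and a “tip element” is each category a tip touches, so a tip flagged in two categories contributes two elements. The category names below are placeholders, not the study’s actual five categories, and the flags are invented for the sake of a concrete calculation.

```python
# Illustrative coding of tips into binary category columns.
# Category names and flag values are hypothetical.
coded_tips = [
    {"coffee": 1, "food": 0, "atmosphere": 1, "service": 0, "other": 0},
    {"coffee": 1, "food": 1, "atmosphere": 0, "service": 0, "other": 0},
    {"coffee": 0, "food": 0, "atmosphere": 0, "service": 1, "other": 0},
]

# A tip element is one (tip, category) flag set to 1.
tip_elements = sum(sum(flags.values()) for flags in coded_tips)
elements_per_tip = tip_elements / len(coded_tips)
```

With three tips and five flags set, this toy data set yields 5 tip elements; the study’s 346 tips yielded 488 by the same counting rule.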
Figure 1 shows that converting tips to tip elements had little to no effect on the distribution of tips among the coffeehouses. Additionally, the division of tip elements confirms that each category accounts for a substantial share of the feedback (approximately 20-30%) and that no fringe categories were included.
With my major categories identified and the modified data set of 488 tip elements coded in Excel, I generated a pivot table to examine how these variations could be organized to present one or more findings. Pivot tables allow data to be stacked, arranged, filtered, and rearranged hierarchically to tease out significant relationships. I prepared a table showing how each major category was represented in the user feedback for each coffeehouse (Figure 2). However, the significance of these relationships across coffeehouses could not be assessed without recalculating the distribution of categories within each venue (Figure 3). For example, coffee is referenced in 29 of the 44 tips for Lola Coffee; this is fewer than the 42 coffee references for Lux Central but more than the counts for the other three coffeehouses: Copper Star Coffee (13 references), Giant Coffee (23 references), and Jobot Coffee (26 references). Once these counts are recalculated relative to all feedback within each venue, tips regarding coffee at Lola Coffee account for nearly half of that venue’s tip elements (48%), roughly double the share for the same category at the other coffeehouses, which now look relatively similar, ranging from 20-29%.
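The Figure 3 recalculation, converting raw category counts into percentages of each venue’s own total, can be sketched in plain Python. The 29 and 42 coffee references below come from the text; the remaining counts are invented so the arithmetic is concrete, and the resulting venue totals are not the study’s actual figures.

```python
# Sketch of the within-venue percentage recalculation (Figure 3 analogue).
# Only the coffee counts for Lola Coffee (29) and Lux Central (42) are from
# the text; all other numbers are hypothetical filler.
from collections import Counter

elements = (
    [("Lola Coffee", "coffee")] * 29 + [("Lola Coffee", "atmosphere")] * 31 +
    [("Lux Central", "coffee")] * 42 + [("Lux Central", "atmosphere")] * 58
)

counts = Counter(elements)                      # raw counts (Figure 2 analogue)
venue_totals = Counter(v for v, _ in elements)  # total tip elements per venue

# Percentage of each venue's own feedback, so categories compare across venues
pct = {
    (venue, cat): round(100 * n / venue_totals[venue])
    for (venue, cat), n in counts.items()
}
```

Under these assumed totals, Lola Coffee’s 29 coffee references come out to 48% of its own feedback while Lux Central’s larger raw count of 42 is a smaller share (42%), which is exactly the kind of reversal the recalculation is meant to surface.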
Significant differences emerged; even so, I reminded myself that the range of user feedback varied considerably.
| Copper Star Coffee | 902 | 4,230 | 24 | 48 | 71 |