Translation Crowdsourcing: Motivating Volunteer Translators


Written on May 19, 2017 – 10:20 pm | by Emily Taylor

Earlier this year, I wrote about recruiting volunteer translators for a localization project benefiting a nonprofit organization as part of my practicum. My team, Loc&Key, partnered with Room to Read, a nonprofit based in San Francisco that focuses on increasing educational opportunities for underprivileged children around the world.

From our very first meeting, my team was faced with a big question: How could we keep our volunteers motivated? I’ve written before about problems that can arise with crowdsourcing, specifically translation crowdsourcing. Loc&Key had it a bit easier, as we were carrying out this project for a nonprofit. This automatically gave our volunteers intrinsic motivation. After all, there’s nothing sexier than saving the world!

However, my team wanted to ensure that we not only acknowledged our volunteers but also gave something back in whatever way we could. We came up with a novel method for thanking our volunteers: call-outs on our own website.

Featured Volunteers

We wrote a series of blog posts called “Featured Volunteers” detailing the type and extent of work some of our volunteers did for us. These posts acted as detailed recommendations for the volunteers who went above and beyond in their roles. Our hope is that these volunteers can link to their write-ups on a portfolio or a LinkedIn profile, for example. In fact, Kaya, who acted as our Terminology Manager, has done exactly that on her LinkedIn profile.

YouTube Credits

We recruited nearly 30 volunteer translators through ProZ and TranslatorsCafé to help us subtitle nine of Room to Read’s YouTube videos. Once their translations had been reviewed, we wrote a recommendation for each volunteer on their forum of choice. However, we wanted to do something more.

We came up with the idea of crediting the translators and reviewers right in the YouTube video descriptions. We were already going to be granted access to our client’s YouTube channel to upload the subtitle files. During a meeting with our contact at Room to Read, we asked for permission to edit the descriptions to add the names of the volunteers who helped with the subtitling project, and she got back to us a few days later with a yes!

I’m particularly proud of this method of volunteer recognition; it’s so official and visible!

Thank You Video

My team also asked our contact, Sarah, to record a video thanking our volunteers, and she agreed! We helped by writing and emailing her a sample script she could base the video on. In the video, Sarah thanks every one of our more than 40 volunteers by name. She describes each project we carried out for Room to Read and the impact they have had or will have.

“Our volunteers were based in over 10 countries and helped us subtitle 13 of Room to Read’s most popular videos into six languages. In total, our volunteers translated over 10,000 words. Thanks to your efforts, we were able to reach new audiences across the globe…”

We shared this video with all of our volunteers to show them that their work is valued and appreciated.

Conclusion

Looking back on this project, the aspect that I remember most is the people. I had the pleasure of working with some amazing, dedicated linguists, and I wanted to do everything I could to make the project as fun for them as it was for me. In addition to being a good project manager – responsive, flexible, and accommodating – I ensured that every linguist who worked for us walked away from the experience in a better position than when they came to us. Being an effective manager of translation crowdsourcing is about more than getting words on the page translated. It’s about building and strengthening human connections.

Translation Crowdsourcing: Recruiting and Managing Volunteer Linguists for Room to Read


Written on March 17, 2017 – 3:49 pm | by Emily Taylor

Loc&Key

Loc&Key is a student-driven translation and localization company based in Monterey, California, created for the Localization Practicum course of the Middlebury Institute of International Studies at Monterey (MIIS). The five project managers and two honorary employees (our beloved CCO and CFO) of this small translation company have combined their diverse skill sets to work together on localization projects that will act as a capstone for our graduate education. Loc&Key has partnered with Room to Read to offer translation, subtitling, terminology management, style guide creation, and translation crowdsourcing consulting services.

Room to Read

Room to Read is a San Francisco-based nonprofit organization that “seeks to transform the lives of millions of children in developing countries by focusing on literacy and gender equality in education.” They reached out to Loc&Key to have their Global Monitoring Report localized from English to Japanese. During our project kick-off meeting, Loc&Key also offered to subtitle the videos on Room to Read’s YouTube channel into Japanese and the other MIIS languages (Spanish, French, German, Russian, Portuguese, Chinese, and Korean). The representatives we spoke to were excited to hear that we could offer this service without requiring the source videos, thanks to our use of the subtitling platform Amara.

Challenges

Finding Translators

While many localization practicum projects use MIIS students to translate, we had a problem: there weren’t enough MIIS Japanese translation students to handle the 9,000-word Global Monitoring Report. We had to look elsewhere for volunteers.

When I lived in Kumamoto, Japan, I participated in an English-to-Japanese translation club called Enjoy Translation. I contacted the organizer, Shino, and told her about our project and about Room to Read. She was enthusiastic about the cause, and when she pitched the idea at the next club meeting, she recruited seven additional volunteer translators for us!

Managing Volunteers

Shino created a private Facebook group called Room to Share that would function as our translation management platform.

Through Room to Share, the volunteers downloaded and uploaded files, claimed sections of the translation, and posted and responded to terminology inquiries. Because the content we were translating was readily available on the Internet, we didn’t need to worry about confidentiality issues. This freed us to use Facebook, a platform that accommodated our volunteers, many of whom were unfamiliar with translation technology.

Finding Reviewers

While our volunteers in Japan were working hard to translate the Global Monitoring Report, I focused on recruiting MIIS students as volunteers to edit the translation. I contacted the Japanese translation professor, Hideko Russell, and received permission to use 15 minutes of class time to pitch our project.

Using a PowerPoint presentation, I introduced Loc&Key and gave an overview of the work Room to Read does by showing my favorite video from their YouTube channel.

I concluded with examples of how we planned to reward our volunteers, such as recommendations on LinkedIn and shout-outs on our website. Indeed, I had already written a post about Kaya, a second-year student of Translation & Interpretation who had been helping me answer terminology questions from our volunteers in Japan.

The presentation was a huge success, as every student in the Japanese program is now on board and eager to help!

Making Progress

Currently, our volunteers in Japan have finished translating the Global Monitoring Report, and I have passed the baton to Mary and Shiori, Translation & Interpretation students here at MIIS who are also native speakers of Japanese. They have offered to devote part of their spring break to editing the translation, after which my team and I will be able to move on to importing the Japanese text into the InDesign file to create a localized Japanese version of the PDF.

Lessons Learned

My work with Shino, Kaya, and our other volunteers has shown me how time-intensive it can be to manage a group of translators – I acted as a go-between, fielding daily questions – but also how smoothly the process can go with a platform that matches the volunteers’ habits (in this case, Facebook). Meanwhile, Loc&Key uses Asana as our project management tool, a tool most of us are unfamiliar with and thus have not used as effectively as we could. Sometimes, simple truly is best!


Localizing Unreal Match 3


Written on December 15, 2016 – 3:22 pm | by Emily Taylor

We spent two weeks in Software & Games Localization learning how to localize Unity games, so for the final project my group decided to go off the beaten path and try the Unreal Engine instead. We chose to localize Unreal Match 3, a match-3 puzzle game made with the engine.

I could not find much documentation about localizing games made with the Unreal Engine, but our professor had linked this forum post as optional post-lecture material following the final Unity module, and my group members and I sat down together and went through it step by step.

Thanks to the Unreal Engine’s localization dashboard, localizing the game was straightforward. The dashboard gave us the option of adding translations directly within it, but we decided to export all translatable text into .po files, which we imported into memoQ for translation. This made the work easier – there was a surprisingly large amount of text to translate – and we felt it more accurately reflected the workflow of a real localization project. We chose to localize into Japanese, German, and Russian. We used Google Translate for the translation, as our focus was on localization rather than translation, so please excuse the machine translation quality in the examples below.
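For anyone wondering how the dashboard knows what to export: Unreal can only gather text that is stored as FText. Here is a minimal sketch of how strings are typically marked in C++ so the gather step can find them and write them to .po files – the identifiers are my own illustration, not taken from Unreal Match 3’s actual source:

```cpp
// Marking UI strings for Unreal's localization gather step.
// Identifiers here are illustrative, not from Unreal Match 3.
#include "CoreMinimal.h"

// Every LOCTEXT key in this file is grouped under this namespace.
#define LOCTEXT_NAMESPACE "Match3UI"

// LOCTEXT wraps a literal in an FText; the gather step extracts it,
// keyed by namespace + key, with the literal as the source string.
const FText BombPowerLabel = LOCTEXT("BombPower", "Bomb Power");

// NSLOCTEXT does the same with an explicit namespace argument.
const FText ScoreLabel = NSLOCTEXT("Match3UI", "Score", "Score");

// The exported .po entry for BombPowerLabel then looks roughly like:
//   msgctxt "Match3UI,BombPower"
//   msgid "Bomb Power"
//   msgstr ""

#undef LOCTEXT_NAMESPACE
```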

Russian is notorious for causing expansion issues, but thankfully almost everything came out looking quite nice.

The only major expansion issue we ran into was the Russian text for “Bomb Power,” which spilled over the frame.


This frame was an image, as opposed to a button that could be easily resized, and it had been made outside of the Unreal Engine, so to solve this problem I exported the image as a .bmp file and then opened it in Photoshop to do some tweaking. I stretched the power bar out so that it was big enough to account for the Russian text and then cleaned the image up to hide any traces of image manipulation.

Before: [original “Bomb Power” frame image]

After: [stretched “Bomb Power” frame image]

I saved the new image and reimported it to the Unreal Engine, and… success!

All of the “Bomb Power” text in Russian is now visible, whereas before quite a bit had been cut off, and it fits comfortably within the frame. (Perhaps the frame is even a tad too big now!)

Here is what the game looks like in Japanese:

And German:

At this point, we were finished localizing the game. The displayed language reflects the language of the user’s smartphone, defaulting to English for anything other than German, Japanese, or Russian. But we wanted to take it a step further and try adding language selection as a menu option.

As mobile games are not usually designed to have language pickers, this turned out to be a harder task than we imagined, and we went through several iterations of the game before coming up with our final version, which still needs a bit of tweaking before we can say it is ready.

It looks great, and, most importantly, it allows the user to select the language, but the check boxes do not work as intended. Instead of a check or other mark appearing in the box, the box disappears when the user selects it. Some work is still needed here.
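For anyone attempting the same thing, the culture switch itself is a single engine call; the hard part is the menu UI around it. Here is a minimal sketch, where the wrapper function and the idea of a per-language button handler are my own assumptions rather than our exact implementation:

```cpp
// Sketch of the culture switch behind a language picker. The wrapper
// function is hypothetical; FInternationalization is the real engine
// API that swaps every localized FText at runtime.
#include "CoreMinimal.h"
#include "Internationalization/Internationalization.h"

// Called from a language button's OnClicked handler (hypothetical).
// CultureCode is an IETF language tag: "ja", "de", "ru", or "en".
void SetGameLanguage(const FString& CultureCode)
{
    // Applies the requested culture; any text without a translation in
    // that culture falls back to the native (English) source string.
    FInternationalization::Get().SetCurrentCulture(CultureCode);
}
```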

This project showed me that localizing a game with the Unreal Engine is fairly straightforward thanks to the built-in localization dashboard, but fixing problems that arise, like text expansion, or adding features, like a language picker, can be time-consuming. As a localization manager, I would work with engineers to fix these issues, but trying to implement them myself gave me a good perspective on the challenges we might face – the kind of perspective every good manager needs to be a leader and problem solver.

ATA Conference Takeaways


Written on November 12, 2016 – 4:34 pm | by Emily Taylor

From November 2 to November 5, I attended the 2016 ATA Conference in San Francisco.

The size and scale of the conference were like nothing I had ever experienced before. Rather than try to detail everything about the conference, I’m going to write up my key takeaways.

Go to the exhibit hall

This was actually the highlight of the conference for me! It was a chance to speak to recruiters and other company representatives one-on-one. The job fair was also worthwhile, of course, but it was crowded. Everyone had to wait in line at each booth, meaning there was no time to ask meaningful questions about the positions the company was looking to fill. At the exhibit hall, you had that luxury, especially if you went during a session.

Choose presentations based on the speaker

Don’t just choose sessions based on what you feel you “should” go to. If you know a session is going to be entertaining because you’ve heard good things about the speaker, give it a try, even if the topic isn’t strictly in your area of interest. Chances are, you’ll get more out of an entertaining session than a dry one that you feel like you “should” attend.

Don’t go to every session

Give yourself at least one break. If you attend every session, your brain will be overloaded and you won’t remember it all anyway! Build some downtime into your schedule. Go over what you’ve learned already while sipping coffee in the lounge, or stroll through the exhibit hall and chat with some companies.

Go to the network dinner

If your language division has a network dinner, sign up to attend, even if you feel it’s too pricey. The food isn’t the point – meeting people is. These are people who may refer clients to you in the future. An investment of $50 or however much the dinner costs is a no-brainer.

You are an expert!

If you are a MIIS student, remember: you are an expert! Don’t go into the ATA conference feeling inferior to the other attendees just because you are a student. These other attendees are your colleagues. And don’t skip out on sessions that are labeled “Intermediate” or “Advanced.” You’ll likely be surprised at how much you already know! I went to an “Intermediate” session and found that nearly all of the content had been covered in my translation courses here at MIIS. Wow!


The 2017 ATA Conference will be October 25-28 in Washington, DC. See you there!

The Intersection of MT and Crowdsourcing


Written on October 27, 2016 – 12:54 pm | by Emily Taylor

When I attended the Localization Unconference, one session I participated in discussed how MT and translation crowdsourcing could be used to complement each other.

One fascinating idea that came up was using MT to check the accuracy of crowd translations. The crowd’s target-language output is machine-translated back into the source language, where it can be checked by the PM, who speaks the source language but not the target language, or even by the original author of the source document.

This way, the accuracy (though not necessarily the style) of the crowd translation can be verified without the added expense of hiring professional reviewers.
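To make the round trip concrete, here is a minimal sketch of the check itself; mt_translate() is a stub standing in for whatever MT API the TMS would actually call:

```cpp
// Sketch of a back-translation check for crowd translations.
// mt_translate() is a stub; a real implementation would call an MT service.
#include <iostream>
#include <string>

std::string mt_translate(const std::string& text,
                         const std::string& source_lang,
                         const std::string& target_lang)
{
    // Stub output so the sketch runs; imagine real MT output here.
    return "[MT " + source_lang + "->" + target_lang + "] " + text;
}

// Returns what a source-language PM would review: the crowd's
// target-language translation machine-translated back into the source.
std::string back_translate(const std::string& crowd_translation,
                           const std::string& source_lang,
                           const std::string& target_lang)
{
    return mt_translate(crowd_translation, target_lang, source_lang);
}

int main()
{
    // The PM compares the round-trip output against the original English
    // segment; a large divergence flags the segment for human review.
    std::cout << back_translate("Esta es una pluma.", "en", "es") << "\n";
}
```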

I’ve designed a sample workflow to show how a TMS could automate almost the entire process:

[Diagram: sample crowdsourcing TMS workflow]


Using the Crowd to Automate QA


Written on October 3, 2016 – 6:11 am | by Emily Taylor

I’ve been thinking about QA recently. It’s the subject of this week’s Translation Management Systems class, and we’ve been assigned to watch a recording of a webinar on MQM (Multidimensional Quality Metrics). Furthermore, I attended the annual Silicon Valley localization unconference last Friday, where I thoroughly enjoyed the session about automated quality measurements, which featured a lively (perhaps even heated) discussion.

MQM is a clever system that tailors a catalog of 120+ QA issue types to the user’s particular needs. It produces a QA checklist suited to the project at hand without requiring the user to evaluate all 120+ items – while each issue is relevant to somebody out there, no single issue is relevant to every user. This is an efficient system, and I wish I’d known about it last semester when I was looking for a QA model to evaluate our Statistical Machine Translation engine. (We used LISA, which worked well enough, although some items weren’t particularly relevant.) But MQM still seems rather… manual. And since attending the unconference, I’ve been wondering: How can we make QA more automated?
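As an aside, the tailoring step itself is simple enough to sketch in code; the issue names and domain tags below are toy examples of mine, not the actual MQM catalog:

```cpp
// Illustrative sketch of tailoring an MQM-style issue catalog into a
// project checklist. Issue names and domain tags are toy examples.
#include <iostream>
#include <string>
#include <vector>

struct IssueType {
    std::string name;                  // e.g. "Terminology", "Omission"
    std::vector<std::string> domains;  // contexts where the issue matters
};

// Keeps only the issue types relevant to the given domain, so the
// reviewer never has to evaluate the full 120+ item catalog.
std::vector<IssueType> TailorChecklist(const std::vector<IssueType>& catalog,
                                       const std::string& domain)
{
    std::vector<IssueType> checklist;
    for (const IssueType& issue : catalog)
        for (const std::string& d : issue.domains)
            if (d == domain) { checklist.push_back(issue); break; }
    return checklist;
}

int main()
{
    const std::vector<IssueType> catalog = {
        {"Terminology", {"technical", "legal"}},
        {"Register",    {"marketing"}},
        {"Omission",    {"technical", "legal", "marketing"}},
    };
    for (const IssueType& issue : TailorChecklist(catalog, "technical"))
        std::cout << issue.name << "\n";  // prints: Terminology, Omission
}
```

But selecting the checklist is the easy, manual part; evaluating translations against it is where the real work lies.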

Consider eBay. It puts out an extraordinary amount of machine translation, far too much to QA beyond sampling. But the problem with sampling is obvious: the randomly chosen sample could contain too few errors, or too many, giving an incorrect picture of the translation as a whole. Given that there is too much translation output to manually check more than a small sample, what other options would a company like eBay have?

One solution would be to tap into its users as a form of QA crowdsourcing. eBay already does this, but in an extremely limited way: users can give the machine-translated text an overall rating at the bottom of the page. It’s unclear how useful an overall rating is in evaluating the quality of the machine translation on that page, and many users could mistakenly use it to rate how much they like the contents of the auction.

What I’m imagining is offering users of machine-translated auctions a way to provide feedback much like Google Translate gives its users.

For example, take an eBay auction for a costume bodice.


I copied and pasted the auction text into Google Translate and set it to output Japanese.


Of course, eBay’s MT engine would be much better at processing this kind of user-generated content. Google Translate got caught up on the typo “willl” in the listing, something eBay’s engine would most likely handle without issue.

But here is where Google Translate excels.


In the Japanese output, the word “only” is mistranslated: instead of this being an auction for the bodice only, the Japanese version reads as an auction for the only bodice in existence. Since this is clearly a mass-produced costume, even a user who understands none of the source text would know that to be a mistranslation.

Thankfully, I can click on the oddly translated word and see additional options (and even an option to improve the translation myself!).


The other options Google Translate lists are closer to the correct translation.

Now, imagine if eBay had a similar user interface on its machine-translated auction pages: they would look like normal auction pages, but the words could be selected just like in Google Translate. This would not only give users a more accurate translation – anyone could select words that don’t seem to fit and see more fitting alternatives – but also give eBay a form of QA for monitoring how well its MT engine is doing, and the improvements could even be fed back into the engine.

Of course, not all of the feedback would be useful. There would be plenty of crackpots playing with the feedback system by selecting random (or purposely incorrect) words. But eBay could avoid this problem by setting a large threshold before any of the feedback gets fed into the MT engine. Its large user base would allow it that luxury.
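Here is a minimal sketch of such a threshold gate, assuming each piece of feedback arrives as a (segment, suggested translation) pair; the data structures and the threshold value are my own illustration, not anything eBay has described:

```cpp
// Sketch of gating crowd feedback before it reaches the MT engine:
// a suggested translation is accepted only once enough users agree.
#include <iostream>
#include <map>
#include <string>
#include <utility>

// Illustrative threshold; a site with eBay's traffic could set it high.
constexpr int kAcceptanceThreshold = 100;

// Vote counts per (source segment, suggested translation) pair.
std::map<std::pair<std::string, std::string>, int> gSuggestionVotes;

// Records one user's suggestion; returns true once it has enough
// independent votes to be fed back into the MT engine.
bool RecordSuggestion(const std::string& segment,
                      const std::string& suggestion)
{
    return ++gSuggestionVotes[{segment, suggestion}] >= kAcceptanceThreshold;
}

int main()
{
    // A lone crackpot vote changes nothing; only repeated agreement
    // across many users promotes a suggestion.
    if (!RecordSuggestion("This item will ship worldwide.", "random junk"))
        std::cout << "Suggestion held pending more votes\n";
}
```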

Machine translation is the perfect solution (indeed, the only solution in most cases) for user-generated content. However, given the unrestricted text we see in these cases, the quality of output still tends to be fairly low, and the quantity of output is far too large for QA checks beyond a very small sample. But vast improvements could be made if companies like eBay or Facebook automated user feedback to continuously evaluate the quality of their machine translation output, and even to improve their machine translation engines.

TMS in the Cloud: The Great Equalizer


Written on September 11, 2016 – 12:24 pm | by Emily Taylor


“Today, there is no excuse,” writes Alison Toot of Smartling in her column in the January/February 2015 issue of MultiLingual. She is referring to the prevalence and overwhelming benefits of cloud-based translation management technology. Indeed, there seems to be no reason why a language service provider of any size should still be using a desktop-based management system. Sure, there are some possible disadvantages, as Lee Densmer writes in the Moravia blog, but these are not only minor, they are likely to be resolved as the technology continues to evolve.

Last week, I had the privilege of attending a talk by Anna Schlegel here at MIIS. Anna is the co-founder of Women in Localization, an association dedicated to helping women advance their careers in the field of localization. Her presentation was mostly focused on how and why companies go global, but the topic of women in localization arose as well. Anna lamented the fact that women still don’t seem to have an equal footing in the world of localization. Specifically, she noticed male colleagues being promoted higher and more frequently than their female counterparts.

Alison’s column on translation management technology in the cloud and Anna’s observations about women in localization resonated with my experiences from my recent summer internship. I interned at the Department of State’s Office of Language Services as a translation manager. Language Services uses LSMS, a proprietary translation management system based on and hosted by Plunet. The senior project manager in our office, Rachel, used this to her advantage when she pioneered a teleworking program that eventually allowed her to perform her project management duties from home four days out of the week. As a dedicated mother of four, she now has more time to balance her family and work lives, and she recently moved so her children could attend school in the district of her choice. Without the option of teleworking four days out of five, the one-hour commute each way would likely have been prohibitive.

Rachel is a role model for me; she is smart and career-driven (and also a MIIS TLM graduate!) but also fully focused on her family life. With the massive strides the world of localization has taken that Alison writes about, it is clear that this field will become more and more accessible for women. Gone are the days when we had to choose between a career and a family.

Alison is right – we indeed live in a cloud-based world. And we women have a lot to gain from it.

What is the Future of Translation Technology?


Written on May 9, 2016 – 10:11 pm | by Emily Taylor


Nothing seems to be as divisive a topic among translators as machine translation (MT). As the quality of MT improves, will it eventually displace human translators? (Spoiler alert: no, it will not.)

We will continue to see improvements in MT, but it will never replace human translation. At some point, gains in quality will begin to level off; MT will never be perfect.

So should we give up improving MT and cut all funding? Absolutely not! 

I believe the future of translation technology is neither machine translation nor human translation, but rather machines and humans working together.

Of course, we already see this in the use of CAT tools. But where machines currently work for humans when translators use CAT tools, in the future this relationship will develop into more of a partnership.

A few companies are offering Post-Editing Machine Translation (PEMT) training and certifications. SDL is one of them. We will see more and more of these, and I predict that for many fields, having PEMT certification will be a requirement for translators, both freelance and in-house.


The debate over the future of translation technology seems to have focused on whether or not MT will ever be as good as human translation. However, I do not see why we need to ask MT to be as good as a human translator. Machines and humans will perform better when working together.

The real future of translation technology? We should continue to improve machine translation while also focusing on creating curriculums to train translators to post-edit machine translated output.

Translation Crowdsourcing: Only Nonprofits Need Apply?


Written on April 17, 2016 – 8:24 am | by Emily Taylor


Seven years ago, LinkedIn made an ill-fated attempt to venture into translation crowdsourcing, a tool many businesses are turning to as a way to reduce the cost, increase the speed, or expand the scope of their translation projects. LinkedIn sent a survey to 12,000 of its members who had labeled themselves as professionals in the translation field to gauge their interest in volunteer translation.

The LinkedIn group that sprang into existence as a result captures the outrage perfectly: “Translators Against Crowdsourcing for Commercial Business.” LinkedIn is a for-profit company, so many of the professional translators who received the survey felt miffed, or even angry, that it was attempting to solicit translations “for free.”

We have seen similar backlash before.


In 2010, Gap responded via Facebook to overwhelming criticism following the unveiling of its new logo with this statement:

Thanks for everyone’s input on the new logo! We’ve had the same logo for 20+ years, and this is just one of the things we’re changing. We know this logo created a lot of buzz and we’re thrilled to see passionate debates unfolding! So much so we’re asking you to share your designs. We love our version, but we’d like to see other ideas. Stay tuned for details in the next few days on this crowd sourcing project.

Gap attempted to frame the proposed crowdsourcing project as a community activity – a way to engage its fans and demonstrate that the company cares. Designers, however, viewed it as exploitation, and Gap became a punching bag on Facebook and Twitter. Days later, the company admitted its mistake, and the crowdsourcing project never saw the light of day.


Even Facebook, whose innovative translation crowdsourcing project is now often cited as a model of success, faced opposition. In fact, the backlash led to the formation of a Facebook group called “Leave Translation to Translators!” protesting the use of volunteer translation.

I have written about Google Translate’s crowdsourcing before as well; it is an excellent model that makes translating fun and addictive. And yet even Google is not immune to crowdsourcing gaffes.

Meanwhile, nonprofits seem to fare extremely well with crowdsourcing, including translation crowdsourcing.

Duolingo, Mozilla, Wikipedia, TED – all have attracted professional translators to volunteer their time for free. Why? For the greater good: to make the Internet and its content more available to people across the globe. Translators do not need to question the motives (and ethics) of nonprofits that call for volunteers. It’s fun – and sexy – to make the world a better place by volunteering for nonprofits.

That is great for nonprofits, but the sentiment among many seems to be: Companies that make a profit should pay for translations.

What’s the takeaway here? Certainly, for-profit companies such as LinkedIn may face opposition from their communities when they request what is seen as, well, translation work for free. However, I believe that they can be just as successful as nonprofits, so long as they accept that not everyone will answer the call to volunteer and, most importantly, keep in mind the motivations of their members.

What is the motivation of members, specifically translation professionals, on LinkedIn? For some, to find work. For others, to network.


How cool would it be if LinkedIn gave your profile a badge indicating that you had helped translate its content?

It would be a resume boost for some, especially those just starting out with no work experience. Many employers love to see candidates who have relevant volunteer experience, and for positions related to localization crowdsourcing, they might even require it. An official badge on your LinkedIn page would be a gold star for these candidates. And of course, it would be a way to connect with others. If I were viewing the LinkedIn page of a new acquaintance after, say, a networking event, and I saw that she also had a LinkedIn volunteer badge, that would be another conversation point I could use the next time I met her.

LinkedIn has the potential to be just as successful in translation crowdsourcing as Facebook, so long as it does so in a way that benefits its community. While some may cry foul since LinkedIn is a for-profit company, let’s face it: translation crowdsourcing is no longer uniquely the realm of nonprofits. LinkedIn, my advice to you is this: you can be the next big translation crowdsourcing success, but always remember your community. They want to feel important and appreciated. Make sure they feel both. And don’t mind the haters.


Google Translate Crowdsourcing


Written on March 26, 2016 – 12:13 pm | by Emily Taylor

I became interested in Google Translate’s crowdsourcing activities when the Google Translate Community Program Manager (who is also a MIIS grad!) came to campus to give a lecture and demonstration on the Google Translate Community.


When you go to translate.google.com, it’s hard to ignore the small banner at the bottom of the page beckoning you to participate. You could be the one watering those plants, helping Google Translate grow!

When you click the “JOIN THE TRANSLATE COMMUNITY” link…


You are taken to a sleek dashboard. Everything is colorful and simple. You can see your stats on the right, and you have two options: translate or validate.

Let’s validate!


Given the source text, we see one or more translations. We have to choose the check mark or the x mark for each one.

Note that Google Translate doesn’t offer any instructions for determining which are correct. While it’s nice to be able to jump right in without having to wade through a style guide, I find myself wondering whether sentences that are translated correctly but with punctuation errors deserve a check mark. I arbitrarily decide that missing a final period is okay, but missing an apostrophe (e.g., youre instead of you’re) is not. I can’t help wondering what sort of personal rules other users have made.


What sort of quality should I be looking for? Flawless human translation quality? Or translations that correctly convey the meaning? Google Translate doesn’t tell me, so I’m on my own again.

Here, most of these get the point across but fail to reflect the politeness in the original Japanese. I choose the one closest, despite its lack of capitalization and punctuation.


The translations to validate come in bite-sized sets of ten.

What’s that? I’m only one validation away from leveling up? Well… I really should be getting some other things done, but how could I not do one more set?


Some of these options are frustratingly close to a good translation (“I have hay fever.”), but I give them an X for sounding a bit off. I wish there were an option to suggest a better translation.


I level up, and a bright yellow character does a quick dance on the screen to celebrate. How rewarding!


I’m now at Level 10, and I think that’s enough translation validating for today. How about some actual translating?


The translating step is similar – simple and quick. But you don’t get any extra “points” for translating, and validation is faster, so I find myself just wanting to validate other users’ translations.


Now whenever I visit translate.google.com, I see my progress, along with a thank you and an encouragement to keep going.


Google Translate has clearly designed a clever crowdsourcing platform.

  • Its sleek and simple interface allows users to start working right away – no learning curve, and no need to slog through a list of rules or through style guides.
  • Since the validations and translations come in bite-sized chunks – no more than a single sentence, and often just a phrase – it’s easy to keep going and going.
  • Leveling up is fun and addictive. As far as I can tell, the number is meaningless… but I still wanted to get it higher!

Clearly, this system will lend itself to a high quantity of translations and validations, which is exactly what Google Translate needs to continuously improve its machine translation. There were, however, a few aspects that made me less inclined to participate.

  • It’s a lot easier to validate than to translate, but both activities are equal in point value. I wanted to actually translate, but my desire to level up kept me in validation mode most of the time. Of course, this is almost certainly by design. While Google Translate doesn’t publicize how many validations a translation needs to be approved, I’m sure it’s quite a few, perhaps even 100 or more. They need more activity in the validation stage than the translation stage, so it wouldn’t make sense to reward translation activity more, even though it requires more time and effort.
  • Overall, the entire process is opaque. While the lack of information and guidance makes for a nice, clean interface, personally, I like to know I’m making a difference. And I found myself wondering – am I making a difference? I wish Google Translate could give me some indication of how much I’m helping.
  • Despite this being a crowdsourcing effort, I didn’t feel like part of a crowd. In fact, I felt rather lonely. There was no leaderboard or any other way to see other users’ activities, and no forums or way to get to know them. It was fun leveling up, but I’d like to be able to show off my points and have a way to compare my progress to that of others.

Google Translate’s crowdsourcing platform works wonderfully for achieving what they require – a high quantity of translations and an even higher quantity of validations to constantly improve its machine translation. Personally, I prefer to volunteer my translation services to causes that let me see how I am making a difference and where I can truly feel like part of a community, so the Google Translate Community may not be for me.

Google Translate may be sacrificing some quality by excluding translators like me, but they have made the correct choice (quantity over quality) for meeting their translation crowdsourcing goals.