Originally posted on Inside AdMob blog
Posted by Chris Jones, Social Team, AdMob.

Native is the next big thing in mobile advertising, with spending on native ads expected to grow to $21 billion in 2018. This is a huge potential opportunity for app developers, but how can you use native ads to help boost your UX and monetize your app? Here's a quick overview of what native advertising is, and how you can get started.

What's Native?

Native advertisements match both the form and function of the user experience in which they're placed. They also match the visual design of the app they live within. Native ads augment the user experience by providing value through relevant ads that flow within the context of surrounding app content. Put simply: native ads fit in.

Native advertising isn't a new thing. Since the golden days of wireless radio and daily newspapers, advertisers have looked for innovative ways to match up their brands and messages with the environment in which they're served to consumers. In the digital advertising world, Google was one of the first "native" advertisers, developing search ads that directly matched the information on the search results page. But today, consumers are everywhere, and we've had to adapt, delivering higher-quality content that can flex to different screens and sizes. For example, mobile-optimized websites now have big buttons and fonts, and mobile apps let users scroll up and down or left and right, rather than having to click through to the pages they want to view.

Native advertising also recognizes that preserving the user experience is vital to successful advertising. Our content has evolved, and our ads need to follow. Nobody likes their app experience to be sideswiped by an obtrusive, ugly ad. Native advertising offers a simple solution: ads that fit the form and function of a developer's content. We help you create ads that are beautiful and engaging, so consumers can maintain their good buzz.

Native ads are cohesive. They never stand out like a sore thumb. They're made to match the look and feel of the app, and are consistent with platform behavior, so that the viewer feels like the ad fits seamlessly with their content experience. In other words, they're ads with UX in mind.

Get started - In 5 easy steps:
Until next time, be sure to stay connected on all things AdMob by following our Twitter, LinkedIn and Google+ pages. via Google Developers Blog http://developers.googleblog.com/2016/10/the-native-way-everything-you-need-to-know-about-native-ads.html
Posted by Israel Shalom, Product Manager
Here at Google, we're serving more than a hundred APIs to ensure that developers have the resources to build amazing experiences with them. We provide a reliable infrastructure and make it as simple as possible so developers can focus on building the future. With this in mind, we're introducing a few improvements to the API experience: more flexible keys, a streamlined getting-started experience, and easy monitoring.

Faster, more flexible key generation

Keys are a standard way for APIs to identify callers, and one of the very first steps in interacting with a Google API. Tens of thousands of keys are created every day for Google APIs, so we're making this step simpler, reducing the old multi-step process to a single click. You no longer need to choose your platform and various other restrictions at the time of creation, but we still encourage scope management as a best practice.

Streamlined getting-started flow

We realize that many developers want to get straight to creation and don't necessarily want to step into the console. We've just introduced an in-flow credential setup procedure embedded directly within the developer documentation: click the 'Get a Key' button, choose or create a project, and then let us take care of enabling the API and creating a key. We are currently rolling this out for the Google Maps APIs, and over the next few months we'll bring it to the rest of our documentation.

API Dashboard

We're not just making it easier to get started, we're simplifying the ongoing usage experience, too. For developers who use one or more APIs frequently, we've built the new API Dashboard to easily view usage and quotas. If you've enabled any APIs, the dashboard is front and center in the API Console. There you can view all the APIs you're using along with usage, error and latency data. Clicking on an API jumps to a detailed report, where you'll see the traffic sliced by methods, credentials, versions and response code (available on select APIs).

We hope these new features make your API usage easier, and we can't wait to see what you're going to build next!

via Google Developers Blog http://developers.googleblog.com/2016/10/key-improvements-for-your-api-experience.html
Below is what happened in search today, as reported on Search Engine Land and from other places across the web. The post SearchCap: Offline call data, AdWords budget creep & AMP fire hose appeared first on Search Engine Land.
Please visit Search Engine Land for the full article. via Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://searchengineland.com/searchcap-offline-call-data-adwords-budget-creep-amp-fire-hose-260276
Company generated increase in quality calls and lowered customer acquisition costs. The post Allstate: offline call data improved SEM performance and the customer experience appeared first on Search Engine Land.
Please visit Search Engine Land for the full article. via Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://searchengineland.com/allstate-offline-call-data-improved-sem-performance-customer-experience-260229
Are you feeling spooked by a mysterious rise in ad spend? Columnist Pauline Jakober takes a look at what you can do in AdWords to solve the mystery. The post 3 mysterious and scary ways AdWords budget creep can happen to you appeared first on Search Engine Land.
Please visit Search Engine Land for the full article. via Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://searchengineland.com/3-mysterious-scary-ways-suffer-adwords-budget-creep-259736
How has the rise of mobile changed the way people view Google SERPs? Contributor Kristi Kellogg summarizes a session from SMX East in which Mediative's Chris Pinkerton discusses the results of eye-tracking studies. The post How mobile has changed the way we search, based on 10+ years of...
Please visit Search Engine Land for the full article. via Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://searchengineland.com/mobile-impacted-way-search-based-10-years-eye-tracking-studies-260052
Columnist Barb Palser believes that the broad surfacing of AMP content in mobile search will expose a universe of AMP content that’s been hidden from view. The post Google opens the AMP fire hose appeared first on Search Engine Land.
Please visit Search Engine Land for the full article. via Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://searchengineland.com/google-opens-amp-firehose-259569

Most SEOs Are No Better than a Coin-Flip at Predicting Which Page Will Rank Better. Can You?

10/3/2016

Posted by willcritchlow

We want to be able to answer questions about why one page outranks another. "What would we have to do to outrank that site?" "Why is our competitor outranking us on this search?" These kinds of questions — from bosses, from clients, and from prospective clients — are a standard part of day-to-day life for many SEOs. I know I've been asked both in the last week.

It's relatively easy to figure out ways that a page can be made more relevant and compelling for a given search, and it's straightforward to think of ways the page or site could be more authoritative (even if it's less straightforward to get it done). But will those changes or that extra link cause an actual reordering of a specific ranking? That's a very hard question to answer with a high degree of certainty.

When we asked a few hundred people to pick which of two pages would rank better for a range of keywords, the average accuracy on UK SERPs was 46%. That's worse than you'd get if you just flipped a coin! The chart of performance by keyword is pretty abysmal.

It's getting harder to unpick all the ranking factors

I've participated in each iteration of Moz's ranking factors survey since its inception in 2009. At one of our recent conferences (the last time I was in San Diego for SearchLove) I talked about how I used to enjoy it and feel like I could add real value by taking the survey, but how that's changed over the years as the complexity has increased. While I remain confident when building strategies to increase overall organic visibility, traffic, and revenue, I'm less sure than ever which individual ranking factors will outweigh which others in a specific case.

The strategic approach looks at whole sites and groups of keywords

My approach is generally to zoom out and build business cases on assumptions about portfolios of rankings, but it's been on my mind recently as I think about the ways machine learning should make Google rankings ever more of a black box, and cause the ranking factors to vary more and more between niches.

In general, "why does this page rank?" is the same as "which of these two pages will rank better?"

I've been teaching myself about deep neural networks using TensorFlow and Keras — an area I'm pretty sure I'd have ended up studying and working in if I'd gone to college 5 years later. As I did so, I started thinking about how you would model a SERP (which is a set of high-dimensional non-linear relationships). I realized that the litmus test of understanding ranking factors — and thus being able to answer "why does that page outrank us?" — boils down to being able to answer a simpler question: given two pages, can you figure out which one will outrank the other for a given query? If you can answer that in the general case, then you know why one page outranks another, and vice versa.

It turns out that people are terrible at answering this question.

I thought that answering this with greater accuracy than a coin flip was going to be a pretty low bar. As you saw from the sneak peek of my results above, that turned out not to be the case. Reckon you can do better? Skip ahead to take the test and find out.
(In fact, if you could find a way to test this effectively, I wonder if it would make a good qualifying question for the next Moz ranking factors survey. Should you listen only to the opinion of those experts who are capable of answering with reasonable accuracy? Note that my test that follows isn't at all rigorous, because you can cheat by Googling the keywords — it's just for entertainment purposes.)

Take the test and see how well you can answer

With my curiosity piqued, I put together a simple test, thinking it would be interesting to see how good expert SEOs actually are at this, as well as to see how well laypeople do. I've included a bit more about the methodology and some early results below, but if you'd like to skip ahead and test yourself, you can go ahead here.

Note that to simplify the adversarial side, I'm going to let you rely on all of Google's spam filtering — you can trust that every URL ranks in the top 10 for its example keyword — so you're choosing an ordering of two pages that do rank for the query, rather than two pages from potentially any domain on the Internet. I haven't designed this to be uncheatable — you can obviously cheat by Googling the keywords — but as my old teachers used to say: "If you do, you'll only be cheating yourself."

Unfortunately, Google Forms seems to have removed the option to be emailed your own answers outside of an apps domain, so if you want to know how you did, note down your answers as you go along and compare them to the correct answers (which are linked from the final page of the test).

You can try your hand with just one keyword or keep going, trying anywhere up to 10 keywords (each with a pair of pages to put in order). Note that you don't need to do all of them; you can submit after any number. You can take the survey either for the US (google.com) or UK (google.co.uk). All results consider only the "blue links" results — i.e. links to web pages — rather than universal search results / one-boxes etc.

What do the early responses show?

Before publishing this post, we sent it out to the @distilled and @moz networks. At the time of writing, almost 300 people have taken the test, and there are already some interesting results.

It seems as though the US questions are slightly easier

The UK test appears to be a little harder (judging both by the accuracy of laypeople, and with a subjective eye). And while accuracy generally increases with experience in both the UK and the US, the vast majority of UK respondents performed worse than a coin flip.

Some easy questions might skew the data in the US

Digging into the data, a few of the US questions are absolute no-brainers (e.g. there's a question about the keyword [mortgage calculator] in the US that 84% of respondents get right regardless of their experience). In comparison, the easiest one in the UK was also a mortgage-related query ([mortgage comparisons]), but only 2/3 of people got that right (67%). Compare the UK results by keyword to the same chart for the US keywords.

So, even though the overall accuracy was a little above 50% in the US (around 56%, or roughly 5/9), I'm not actually convinced that US SERPs are generally easier to understand. I think there are a lot of US SERPs where human accuracy is in the 40% range.
The Dunning-Kruger effect is on display

The Dunning-Kruger effect is a well-studied psychological phenomenon whereby people "fail to adequately assess their level of competence," typically feeling unsure in areas where they are actually strong (impostor syndrome) and overconfident in areas where they are weak. Alongside the raw predictions, I asked respondents to give their confidence in their rankings for each URL pair on a scale from 1 ("Essentially a guess, but I've picked the one I think") to 5 ("I'm sure my chosen page should rank better"). The effect was most pronounced on the UK SERPs, where respondents answering that they were sure or fairly sure (4–5) were almost as likely to be wrong as those guessing (1), and almost four percentage points worse than those who said they were unsure (2–3).

Is Google getting some of these wrong?

The question I asked SEOs was "which page do you think ranks better?", not "which page is a better result?", so in general, most of the results say very little about whether Google is picking the right result in terms of user satisfaction. I did, however, ask people to share the survey with their non-SEO friends and ask them to answer the latter question. If I had a large enough sample size, you might expect to see some correlation here — but remember that these were a diverse array of queries and the average respondent might well not be in the target market, so it's perfectly possible that Google knows what a good result looks like better than they do.

Having said that, in my own opinion, there are one or two of these results that are clearly wrong in UX terms, and it might be interesting to analyze why the "wrong" page is ranking better. Maybe that'll be a topic for a follow-up post. If you want to dig into it, there's enough data in both the post above and the answers given at the end of the survey to find the ones I mean (I don't want to spoil it for those who haven't tried it out yet). Let me know if you dive into the ranking factors and come up with any theories.

There is hope for our ability to fight machine learning with machine learning

One of the disappointments of putting together this test was that by the time I'd made the Google Form, I knew too many of the answers to be able to test myself fairly. But I was comforted by the fact that I could do the next best thing — I could test my neural network (well, my model, refactored by our R&D team and trained on data they gathered, which we flippantly called Deeprank). I think this is fair; the instructions did say "use whatever tools you like to assess the sites, but please don't skew the results by performing the queries on Google yourself." The neural network wasn't trained on these results, so I think that's within the rules. I ran it on the UK questions because it was trained on google.co.uk SERPs, and it did better than a coin flip.

So maybe there is hope that smarter tools could help us continue to answer questions like "why is our competitor outranking us on this search?", even as Google's black box gets ever more complex and impenetrable. If you want to hear more about these results as I gather more data, and get updates on Deeprank when it's ready for prime time, be sure to add your email address when you:

Take the test (or just drop me your email here)

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team.
Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read! via The Moz Blog http://tracking.feedpress.it/link/9375/4554954
Originally posted on Google Cloud Platform Blog

Posted by Sara Robinson, Developer Advocate
At Google I/O this May, Firebase announced a new suite of products to help developers build mobile apps. Firebase Analytics, a part of the new Firebase platform, is a tool that automatically captures data on how people are using your iOS and Android app, and lets you define your own custom app events. When the data's captured, it's available through a dashboard in the Firebase console. One of my favorite cloud integrations with the new Firebase platform is the ability to export raw data from Firebase Analytics to Google BigQuery for custom analysis. This custom analysis is particularly useful for aggregating data from the iOS and Android versions of your app, and accessing custom parameters passed in your Firebase Analytics events. Let's take a look at what you can do with this powerful combination.

How does the BigQuery export work?

After linking your Firebase project to BigQuery, Firebase automatically exports a new table to an associated BigQuery dataset every day. If you have both iOS and Android versions of your app, Firebase exports the data for each platform into a separate dataset. Each table contains the user activity and demographic data automatically captured by Firebase Analytics, along with any custom events you're capturing in your app. Thus, after exporting one week's worth of data for a cross-platform app, your BigQuery project would contain two datasets, each with seven tables.

Diving into the data

The schema for every Firebase Analytics export table is the same, and we've created two datasets (one for iOS and one for Android) with sample user data for you to run the example queries below. The datasets are for a sample cross-platform iOS and Android gaming app. Each dataset contains seven tables, one week's worth of analytics data. The following query will return some basic user demographic and device data for one day of usage on the iOS version of our app:
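The query embedded in the original post isn't preserved in this repost; here is a minimal sketch of it in BigQuery legacy SQL (the dialect the later examples rely on). The dataset and table names are placeholders standing in for the sample datasets described above.

```sql
-- Sketch: basic demographic and device data for one day of iOS usage.
-- [ios_dataset.app_events_20160601] is a placeholder table name.
SELECT
  user_dim.app_info.app_instance_id,     -- unique ID per app installation
  user_dim.device_info.device_category,  -- e.g. mobile, tablet
  user_dim.device_info.user_default_language,
  user_dim.device_info.platform_version,
  user_dim.geo_info.country,
  user_dim.geo_info.city,
  user_dim.app_info.app_version
FROM [ios_dataset.app_events_20160601]
LIMIT 10
```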
Since the schema for every BigQuery table exported from Firebase Analytics is the same, you can run any of the queries in this post on your own Firebase Analytics data by replacing the dataset and table names with the ones for your project. The schema has user data and event data. All user data is automatically captured by Firebase Analytics, and the event data is populated by any custom events you add to your app. Let's take a look at the specific records for both user and event data.

User data

The user records contain a unique app instance ID for each user (user_dim.app_info.app_instance_id in the schema), along with data on their location, device and app version. In the Firebase console, there are separate dashboards for the app's Android and iOS analytics. With BigQuery, we can run a query to find out where our users are accessing our app around the world across both platforms. The query below makes use of BigQuery's union feature, which lets you use a comma as a UNION ALL operator. Since a row is created in our table for each bundle of events a user triggers, we use EXACT_COUNT_DISTINCT to make sure each user is only counted once:
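The union query itself is missing from this repost; the following sketch shows the approach just described, again with placeholder dataset names:

```sql
-- Sketch: unique users per country across iOS and Android.
-- In legacy SQL, the comma between tables acts as UNION ALL.
SELECT
  user_dim.geo_info.country AS country,
  EXACT_COUNT_DISTINCT(user_dim.app_info.app_instance_id) AS users
FROM
  [ios_dataset.app_events_20160601],
  [android_dataset.app_events_20160601]
GROUP BY country
ORDER BY users DESC
```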
User data also includes a user_properties record, which includes attributes you define to describe different segments of your user base, like language preference or geographic location. Firebase Analytics captures some user properties by default, and you can create up to 25 of your own. A user's language preference is one of the default user properties. To see which languages our users speak across platforms, we can run the following query:
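The language query is also missing here; this sketch uses the same placeholder dataset names and assumes the default property key is 'language':

```sql
-- Sketch: unique users by language preference across platforms.
-- The property key 'language' is an assumption about the default property name.
SELECT
  user_dim.user_properties.value.value.string_value AS language,
  EXACT_COUNT_DISTINCT(user_dim.app_info.app_instance_id) AS users
FROM
  [ios_dataset.app_events_20160601],
  [android_dataset.app_events_20160601]
WHERE user_dim.user_properties.key = 'language'
GROUP BY language
ORDER BY users DESC
```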
Event data

Firebase Analytics makes it easy to log custom events, such as tracking item purchases or button clicks in your app. When you log an event, you pass an event name and up to 25 parameters to Firebase Analytics, and it automatically tracks the number of times the event has occurred. The following query shows the number of times each event in our app has occurred on Android for a particular day:
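A sketch of that per-event count, with a placeholder table name:

```sql
-- Sketch: how many times each event fired on Android for one day.
SELECT
  event_dim.name AS event_name,
  COUNT(event_dim.name) AS event_count
FROM [android_dataset.app_events_20160601]
GROUP BY event_name
ORDER BY event_count DESC
```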
If you have another type of value associated with an event (like item prices), you can pass it through as an optional value parameter and filter by this value in BigQuery. In our sample tables, there is a spend_virtual_currency event. We can write the following query to see how much virtual currency players spend at one time:
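A sketch of the virtual-currency query, assuming the amount is logged as an integer 'value' parameter on the event:

```sql
-- Sketch: distribution of virtual currency spent in a single event.
-- The parameter key 'value' and its int_value type are assumptions.
SELECT
  event_dim.params.value.int_value AS currency_amount,
  COUNT(*) AS times_spent
FROM [android_dataset.app_events_20160601]
WHERE event_dim.name = 'spend_virtual_currency'
  AND event_dim.params.key = 'value'
GROUP BY currency_amount
ORDER BY times_spent DESC
```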
Building complex queries

What if we want to run a query across both platforms of our app over a specific date range? Since Firebase Analytics data is split into tables for each day, we can do this using BigQuery's TABLE_DATE_RANGE function. This query returns a count of the cities users are coming from over a one-week period:
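A sketch of the TABLE_DATE_RANGE query; the date range and dataset names are placeholders:

```sql
-- Sketch: unique users per city, both platforms, over one week.
-- TABLE_DATE_RANGE expands the prefix to every daily table in the range.
SELECT
  user_dim.geo_info.city AS city,
  EXACT_COUNT_DISTINCT(user_dim.app_info.app_instance_id) AS users
FROM
  TABLE_DATE_RANGE([ios_dataset.app_events_],
                   TIMESTAMP('2016-06-01'), TIMESTAMP('2016-06-07')),
  TABLE_DATE_RANGE([android_dataset.app_events_],
                   TIMESTAMP('2016-06-01'), TIMESTAMP('2016-06-07'))
GROUP BY city
ORDER BY users DESC
```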
We can also write a query to compare mobile vs. tablet usage across platforms over a one-week period:
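A sketch of that comparison, assuming the export schema's user_dim.app_info.app_platform and user_dim.device_info.device_category fields:

```sql
-- Sketch: mobile vs. tablet usage by platform over one week.
SELECT
  user_dim.app_info.app_platform AS platform,      -- IOS or ANDROID
  user_dim.device_info.device_category AS device,  -- mobile or tablet
  EXACT_COUNT_DISTINCT(user_dim.app_info.app_instance_id) AS users
FROM
  TABLE_DATE_RANGE([ios_dataset.app_events_],
                   TIMESTAMP('2016-06-01'), TIMESTAMP('2016-06-07')),
  TABLE_DATE_RANGE([android_dataset.app_events_],
                   TIMESTAMP('2016-06-01'), TIMESTAMP('2016-06-07'))
GROUP BY platform, device
ORDER BY users DESC
```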
Getting a bit more complex, we can write a query to generate a report of unique user events across platforms over the past two weeks. Here we use PARTITION BY and EXACT_COUNT_DISTINCT to de-dupe our event report by users, making use of user properties and the user_dim.user_id field:
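The original query isn't preserved; this simplified sketch captures the PARTITION BY / EXACT_COUNT_DISTINCT de-duping idea using user_dim.user_id (the original also made use of user properties):

```sql
-- Sketch: unique-user counts per event across platforms, past two weeks.
-- ROW_NUMBER() OVER (PARTITION BY ...) keeps one row per (user, event) pair;
-- EXACT_COUNT_DISTINCT then counts each user once per event name.
SELECT
  event_name,
  EXACT_COUNT_DISTINCT(user_id) AS unique_users
FROM (
  SELECT
    user_dim.user_id AS user_id,
    event_dim.name AS event_name,
    ROW_NUMBER() OVER (PARTITION BY user_dim.user_id, event_dim.name) AS row_num
  FROM
    TABLE_DATE_RANGE([ios_dataset.app_events_],
                     DATE_ADD(CURRENT_TIMESTAMP(), -14, 'DAY'),
                     CURRENT_TIMESTAMP()),
    TABLE_DATE_RANGE([android_dataset.app_events_],
                     DATE_ADD(CURRENT_TIMESTAMP(), -14, 'DAY'),
                     CURRENT_TIMESTAMP())
)
WHERE row_num = 1
GROUP BY event_name
ORDER BY unique_users DESC
```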
If you have data in Google Analytics for the same app, it’s also possible to export your Google Analytics data to BigQuery and do a JOIN with your Firebase Analytics BigQuery tables.
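No join example survives in this repost; the following hypothetical sketch assumes you record the same user ID in both products (as user_dim.user_id in Firebase Analytics, and as custom dimension 1 in Google Analytics), with placeholder dataset names throughout:

```sql
-- Hypothetical sketch: joining Firebase Analytics and Google Analytics exports
-- on a shared user ID. Dataset names and the custom dimension index are assumptions.
SELECT
  fa.user_dim.user_id AS user_id,
  COUNT(*) AS firebase_rows
FROM [firebase_ios_dataset.app_events_20160601] AS fa
JOIN (
  SELECT customDimensions.value AS ga_user_id
  FROM FLATTEN([ga_dataset.ga_sessions_20160601], customDimensions)
  WHERE customDimensions.index = 1
) AS ga
ON fa.user_dim.user_id = ga.ga_user_id
GROUP BY user_id
```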