
Google I/O keynote: garbage collection and memory management on Android
The Google I/O 2011 session "Memory management for Android Apps", given three years ago, covers the changes Android has made to garbage collection, how to find memory leaks, and how to manage memory in Android apps.

Original transcript: (The English is not very complex; a modest level of English is enough to follow it.)

Hi everybody,

My name’s Patrick Dubroy and today I’m going to talk to you about memory management for Android. So I’m really happy to see so many people here who care about memory management, especially near the end of the day.

So let’s get started. So I’m sure you all remember this device. This is the T-Mobile G1. Hugo talked about it in the keynote yesterday. It was released about two and a half years ago. So, is there anybody here who actually developed on the G1? All right, impressive. That’s about maybe 40% of the room. So you may remember that the G1 came with 192 megabytes of RAM. And in fact, most of that was used up by the system. There wasn’t a whole lot left over for apps.

Fast forward a few years later, we have the Motorola Xoom released just a couple of months ago. The Xoom comes with a gigabyte of RAM. Now some people might hear this and think, OK, my troubles are over. I never have to worry about memory management again. And of course, given that we have a whole room here, you guys are all smart people and you realize that that's not true. And there are a couple of reasons for this. First of all, the Xoom has six and a half times the resolution that the G1 had. So you've got six and a half times as many pixels on screen. That means you're probably going to need to use more memory. You've got more bitmaps, for example. The other thing is that on tablets, you really want to create a new kind of application. You know, the rich, immersive applications, like this YouTube app, for example. These are going to take a lot of memory. There's tons of bitmaps in here. Those use up a lot of memory. Also, with the Xoom we're talking about a pretty new device. This is basically bleeding edge hardware. Most of your customers are not going to be using something that's only two months old. So of course, you want to support people who are using older hardware as well.

Finally, maybe you're all familiar with Parkinson's Law, which says that work always takes as much time as you have. And really, it's kind of the same for software. So, no matter how much memory you have, you're going to find a way to use it and wish you had just a little bit more.

What I want to talk about today are basically two different things. First of all, I want to cover some of the changes that we've made in Gingerbread and Honeycomb that affect how your app uses memory. That's your cameo. All right, so as I was saying, there are two different things I want to cover today. So first of all, I want to talk about some of the changes we've made in Gingerbread and Honeycomb that affect how your apps use memory. And basically, memory management in general. In the second half of the talk I want to talk about some tools you can use to better understand how your app is using memory. And if you have memory leaks, how you can figure out where those memory leaks are.

So just to set expectations for this talk, I'm going to make some assumptions about the stuff you've done and it'll really help you get the most out of this if you're familiar with these concepts. So I'm hoping that you've all written Android apps before. And it looked like about half the room had developed on the G1, so that's probably true. I hope that most of you have heard of the Dalvik heap. You have a basic idea of what I'm talking about when I talk about heap memory. I'm sure you're familiar with the garbage collector. You have a basic idea hopefully of what garbage collection is and how it works. And probably, most of you have seen an OutOfMemoryError before and you have a basic idea of why you get it and what you can do to deal with it.

So first, let’s talk about heap size. So you may know that in Android, there’s a hard limit on your application’s heap size. And there’s a couple reasons for this. So first of all, one of the great features of Android is that it has full multitasking. So you actually have multiple programs running at once. And so obviously, each one can’t use all of your devices memory. We also don’t want a runaway app to just start getting bigger and bigger and bloating the entire system. You always want your dialer to work, your launcher the work, that sort of thing. So there’s this hard limit on heap size and if your application needs to allocate more memory and you’ve gone up to that heap size limit already,then you’re basically going to get an out of memory error. So this heap size limit is device dependent. It’s changed a lot over the years. On the G1 it was 16 megabytes. On the Xoom it’s now 48 megabytes. So it’s a little bit bigger.

If you’re writing an app and you want to know, OK, how much heap space do I have available? You know, maybe you want to decide how much stuff to keep in a cache for example. There’s a method you can use in ActivityManager, getMemoryClass that will return an integer value in megabytes, which is your heap size. Now these limits were designed assuming that you know almost any app that you would want to build on Android should be able to fit under these limits.

Of course, there are some apps that are really memory intensive. And as I said, on the tablet, we really want to build almost a new class of application. It's quite different from the kind of things you were building on phones. So we thought, if someone wants to build an image editor, for example, on the Xoom, they should be able to do that. But an image editor's a really memory intensive application. It's unlikely that you could build a good one that used less than 48 megabytes of heap. So in Honeycomb we've added a new option that allows applications like this to get a larger heap size. Basically, you can put something in your AndroidManifest, largeHeap equals true. And that will allow your application to use more heap. And similarly, there's a method you can use to determine how much memory you have available to you. The ActivityManager getLargeMemoryClass method, again, will return an integer value of this large heap size.
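(For illustration, a sketch of reading both limits; it assumes the manifest's application element declares android:largeHeap="true" on Honeycomb or later, and the helper class name is made up.)

    import android.app.ActivityManager;
    import android.content.Context;
    import android.util.Log;

    public class HeapLimits {
        public static void logHeapLimits(Context context) {
            ActivityManager am =
                    (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
            // getLargeMemoryClass() reports the limit that applies to apps that declare
            // android:largeHeap="true" in their manifest (Honeycomb and later).
            Log.i("HeapLimits", "normal heap class: " + am.getMemoryClass() + " MB");
            Log.i("HeapLimits", "large heap class: " + am.getLargeMemoryClass() + " MB");
        }
    }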

Now before we go any further, I want to give a big warning here. You know, this is not something you should be doing just because you got an out of memory error, or you think that your app deserves a bigger heap. You're not going to be doing yourself any favors, because your app is going to perform more poorly: a bigger heap means you're going to spend more time on garbage collection. Also, your users are probably going to notice because all their other apps are getting kicked out of memory. It's really something you want to reserve for when you really understand: OK, I'm using tons of memory, I know exactly why I'm using that memory, and I really need to use that memory. That's the only time that you should be using this large heap option. So I mentioned garbage collection. And that it takes longer when you have a bigger heap. Let's talk a little bit about garbage collection. So I just want to go through a quick explanation here of what garbage collection is doing. So basically, you have a set of objects. First of all, let's say these blue objects here, these are the objects in your heap. And they form a kind of graph. They've got references to each other. Some of those objects are alive, some of them are not used anymore. So what the GC does is it starts from a set of objects that we call the roots. These are the objects that the GC knows are alive.

For example, variables that are alive on a thread stack, JNI global references, and we treat objects in the zygote heap as roots as well. So basically, the GC starts with those objects and starts visiting the other objects. And basically, traversing through the whole graph to find out which objects are referenced directly or indirectly from the GC roots. At the end of this process, you've got some objects left over, which the GC never visited. Those are your garbage. They can be collected. So it's a pretty simple concept. And you can see why I said that with bigger heaps you're going to have larger pause times. Because the garbage collector basically has to traverse your entire live set of objects. If you're using, say, the large heap option and you've got 256 megs of heap, well, that's a lot of memory for the garbage collector to walk over. You're going to see longer pause times with that.
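(For illustration only, a toy Java sketch of the mark step being described, not Dalvik's actual collector: walk everything reachable from the roots, and whatever was never visited is garbage.)

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class MarkSketch {
        static class Obj {
            final String name;
            final List<Obj> refs = new ArrayList<Obj>();
            Obj(String name) { this.name = name; }
        }

        // Returns the set of objects reachable from the roots; everything else is collectible.
        static Set<Obj> mark(Collection<Obj> roots) {
            Set<Obj> live = new HashSet<Obj>();
            Deque<Obj> pending = new ArrayDeque<Obj>(roots);
            while (!pending.isEmpty()) {
                Obj o = pending.pop();
                if (live.add(o)) {        // visit each object only once
                    pending.addAll(o.refs);
                }
            }
            return live;
        }

        public static void main(String[] args) {
            Obj a = new Obj("a"), b = new Obj("b"), c = new Obj("c");
            a.refs.add(b);                // a -> b is reachable from the root, c is not
            Set<Obj> live = mark(Collections.singletonList(a));
            System.out.println("c is garbage: " + !live.contains(c));
        }
    }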

We have some good news though. In Gingerbread, there have been some great changes to the garbage collector that make things a lot better. So in Gingerbre– sorry, pre-Gingerbread, the state of the garbage collector was that we had a stop-the-world collector. So what this means is that basically, when a garbage collection is in progress, your application is stopped. All your application threads are completely stopped while the collection is proceeding. This is a pretty standard thing. These pauses generally tend to be pretty short. What we found was that as heaps were getting bigger, these pause times were getting to be a little bit too long. So we were seeing pauses up around 50 to 100 milliseconds. And if you're trying to build a really responsive app, that kind of pause time is not really acceptable.

So in Gingerbread, we now have a concurrent garbage collector. It does most of its work concurrently, which means that your application is not stopped for the duration of the garbage collection. Basically, we have another thread that's running at the same time as your application that can perform garbage collection work. You'll see basically two short pauses. One at the beginning of a collection and one near the end. But these pause times are going to be much, much lower. Usually you'll see two, three, four, or five milliseconds for your pause time. So that's a significant improvement. Pause times are about 10% of what they used to be. So that's a really good change that we have in Gingerbread.

Now if you’re building memory heavy apps, there’s a good chance you’re using a lot of bitmaps. We found that in a lot of apps you have maybe 50 or 75% of your heap is taken up by bitmaps. And in Honeycomb because you’re going to be developing on tablets, this gets even worse. Because your images are bigger to fill the screen. So before Honeycomb, the way we managed bitmaps was this. So the blue area up here is the Dalvik heap and this yellow object is a bitmap object. Now bitmap objects are always the same size in the heap no matter what their resolution is. The backing memory for the bitmap is actually stored in another object. So the pixel data is stored separately. Now before Honeycomb what we did was this pixel data was actually native memory. It was allocated using malloc outside the Dalvik heap. And this had a few consequences. If you wanted to free this memory you could either call recycle, which would free the memory synchronously. But if you didn’t call recycle and you were waiting for your bitmap to get garbage collected,we had to rely on the finalizer to free the backing memory for the bitmap. And if you’re familiar with finalization, you probably know that it’s an inherently unreliable process. Just by its nature it takes several collections, usually for finalization to complete. So this can cause problems with bitmap heavy app as you had to wait for several garbage collections before your pixel data was reclaimed. And this could be a lot of memory because bitmap pixel data is quite a significant portion of the heap. This also made things harder to debug. If you were using standard memory analysis tools like the Eclipse Memory Analyzer, it couldn’t actually see this native memory. You would see this tiny bitmap object. Sure, but that doesn’t tell you very much. You don’t mind if you have a 10 by 10 bitmap. But if you have a 512 by 512 bitmap it’s a big difference. Finally, the other problem that we had with this approach was that it required full stop the world garbage collections in order to reclaim the backing memory, assuming that you didn’t call recycle,that is.

The good news is in Honeycomb we’ve changed the way this works. And the bitmap pixel data is now allocated inside the Dalvik heap. So this means it can be freed synchronously by the GC on the same cycle that your bitmap gets collected. It’s also easier to debug because you can see this backing memory in standard analysis tools like Eclipse Memory Analyzer. And I’m going to do a demo in a few minutes and you’ll see really, how much more useful this is when you can see that memory. Finally, this strategy is more amenable to concurrent and partial garbage collections, which means we can generally keep those pause times down. So those are the two biggest changes that we’ve introduced in Gingerbread and Honeycomb that affect how your apps use memory.

And now I want to dive into some tools that you can use to better understand how much memory your app's using. And if you have memory leaks, better understanding where those leaks are and generally, how your app is using memory. The most basic tool you can use for understanding your app's memory usage is to look at your log messages. So these are the log messages that you see in DDMS in the logcat view. You can also see them at the command line using adb logcat. And every time a garbage collection happens in your process, you're going to see a message that looks something like this one. And I just want to go through the different parts of this message, so you can better understand what it's telling you.

The first thing we have is the reason for the garbage collection. Kind of what triggered it and what kind of collection is it. This one here was a concurrent collection. So a concurrent collection is triggered by basically, as your heap starts to fill up, we kick off our concurrent garbage collection so that it can hopefully complete before your heap gets full.

Other kinds of collections that you’ll see in the log messages. GC for malloc is one of them. That’s what happens when say, we didn’t complete the concurrent collection in time and your application had to allocate more memory. The heap was full, so we had to stop and do a garbage collection.

You’ll see GC external alloc, which is for externally allocated memory, like the bitmap pixel data which I mentioned. It’s also used for NIO direct byte buffers. Now this external memory as I mentioned, has gone away in Honeycomb. Basically everything is allocated inside the Dalvik heap now. So you won’t see this in your log messages in Honeycomb and later.

You’ll also see a message if you do an HPROF, if you create an HPROF profile. And finally, the last one I want to mention is GC explicit. You’ll see this generally when you’re calling system.gc, which is something that you know you really should avoid doing. In general, you should trust in the garbage collector. We’ve got some information also about the amount of memory that was freed on this collection. There’s some statistics about the heap. So the heap in this case, was 65% free after the collection completed. There’s about three and a half megs of live objects and the total heap size here is listed as well. It’s almost 10 megs, 9,991 K. There’s some information about externally allocated memory, which is the bitmap pixel data and also, NIO direct byte buffers. The two numbers here, the first number is the amount of external memory that your app has allocated. The second number is a sort of soft limit. When you’ve allocated that much memory, we’re going to kick off a GC. Finally, you’ll see the pause times for that collection. And this is where you’re going to see the effect of your heap size. Larger heaps are going to have larger pause times.

The good news is for a concurrent collection, you’re going to see these pause times generally pretty low. Concurrent collections are going to show two pause times. There’s one short pause at the beginning of the collection and one most of the way through. Non-concurrent collections you’ll see a single pause time, and this is generally going to be quite a bit higher. So looking at your log messages is a really basic way to understand how much memory your app is using. But it doesn’t really tell you, where am I using that memory? What objects are using this memory?

And the best way to do that is using heap dumps. So a heap dump is basically a binary file that contains information about all of the objects in your heap. You can create a heap dump using DDMS by clicking on the icon, this somewhat cryptic icon. I think they mentioned it in the previous talk. There's also an API for creating heap dumps. In general, I find using DDMS is fine. There are times when you want to create a heap dump at a very, very specific point in time. Maybe when you're trying to track down a memory leak. So it can be helpful to use that API. You may need to convert the heap dump to the standard HPROF format. You'll only need to do that if you're using the standalone version of DDMS. If you're using the Eclipse plug-in, the ADT plug-in, it will automatically convert it. But the conversion is pretty simple. There's a tool in the Android SDK, which you can use to do it. And after you've converted it to the standard HPROF format, you can analyze it with standard heap analysis tools, like MAT or jhat.
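(For illustration, a sketch of the programmatic heap-dump API, android.os.Debug.dumpHprofData(), for when you need a dump at a very specific moment; the output path and helper class are made up, and with the standalone tools the file still needs the conversion step mentioned above before MAT or jhat can open it.)

    import android.os.Debug;
    import android.os.Environment;
    import android.util.Log;
    import java.io.File;
    import java.io.IOException;

    public class HeapDumper {
        // Writes an HPROF snapshot of the current process heap to external storage.
        public static void dump(String tag) {
            File out = new File(Environment.getExternalStorageDirectory(), tag + ".hprof");
            try {
                Debug.dumpHprofData(out.getAbsolutePath());
            } catch (IOException e) {
                Log.w("HeapDumper", "could not write heap dump", e);
            }
        }
    }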

And I’m going to show an example of MAT, which is the shorter way of saying the Eclipse Memory Analyzer. And before I jump into the demo, I want to talk about memory leaks. So there’s kind of a misconception that in a managed run time, you can’t have memory leaks. And I’m sure you guys know that’s not true. Having a garbage collector does not prevent memory leaks. A memory leak in a managed runtime is a little bit different though, than a memory leak in C or C++. Basically, a leak is when you have a reference to an unused object that’s preventing that object from being garbage collected. And sometimes you can have a reference to a single object, but that object points to a bunch of other objects. And basically, that single reference is preventing a large group of objects from being collected.

One thing to watch out for in Android. I see people sometimes, and I've done this myself, accidentally create a memory leak by holding a long-lived reference to an activity. So you need to be really careful with that; maybe it's that you're holding a reference to the context, and that's what happens. You can also do it by keeping a long-lived reference to a view or to a drawable, because these will also hold a reference to the activity that they were originally in. And the reason that this is a problem, the reason this causes a memory leak, is this. So you've got your activity, it contains a ViewGroup, a LinearLayout or something, and it contains some views. And we've got a reference from the framework to the currently visible activity.

But in Android, when you have a rotation event, so you rotate your device, what we do is actually build up a new view hierarchy, because you need to load new resources, you may have a brand new layout for landscape or portrait, you may have differently sized icons or bitmaps. And then we basically remove the reference to the old view hierarchy and point to the new one. And the idea is that this old view hierarchy should get garbage collected. But if you're holding a reference to that, you're going to prevent it from getting garbage collected. And that's why it's a problem to hold a long-lived reference to an activity or even to a view, because in fact, the arrows connecting these objects should be going in both directions. Because you've got pointers all the way up. So if you do have a memory leak, a really good way to figure out where it is is to use the Eclipse Memory Analyzer.

I’m going to do a demo of that, but I want to first cover some of the concepts behind the Memory Analyzer, so that when I do the demo you’ll better understand what I’m showing you. So the Eclipse Memory Analyzer can be downloaded from the eclipse.org site. It comes in a couple of flavors. There’s an Eclipse plug-in version, there’s also a standalone version. I’m going to be demonstrating the standalone version. I just personally prefer not to have Eclipse have all these different plug-ins. I kind of like to keep things a little bit separate. But they’re basically the same. Now, Memory Analyzer has some important concepts that you’ll see a lot.

It talks about shallow heap and retained heap. So the shallow heap of an object is just how large this object is, its size in bytes. It's really simple. So let's say that all of these objects are 100 bytes. So their shallow heap is 100 bytes. It's easy. The retained heap is something different. Basically, the retained heap says, if I have an object here and I were to free this object, what other objects is it pointing to? And could those be freed at the same time? And so you calculate the retained heap in terms of, what is the total size of objects that could be freed by freeing this one object? So maybe it's best to understand with an example. So this object down on the right-hand side in yellow, this guy doesn't point to any other objects. So his retained size is pretty easy to calculate. His retained heap is 100. This guy on top, he has a pointer to one other object. But he's not holding that object alive. There are other pointers to that same object. So this guy's retained heap is also just 100 bytes. Because if we were to remove this object, it's not going to free up any other objects. The object down at the end however, it's basically keeping all the other objects alive. So its retained heap is 400 because if we could free that object, we could free all the other objects as well, on this slide anyway. So you might be wondering, how do you go about calculating the retained heap?

So you’re going to see this in Memory Analyzer. And actually, knowing how it calculates the retained heap is quite useful. So the Memory Analyzer uses a concept called the denominator tree. This is a concept from graph theory. Basically, if you have a node A and a node B, A is said to be the dominator of B if every path to B goes through A. And so you might see how that could help us figure out what the retained heap of an object is. So another example here. So let’s start with A. It’s kind of the root. B and C are only accessible through A. So it’s pretty straightforward. They’re children of A and the dominator tree. E is also only accessible through C. So it’s a child of C in the dominator tree. D is a little bit interesting here. D can be accessed through B or C, but A is on every path to D. So that means that A is the parent of D and the dominator tree. And now you’re going to see this dominator tree concept also pop up in Memory Analyzer in its UI. And it can be really helpful for tracking down memory leaks.

So let’s jump in and do a demo of debugging and memory leak with MAT. So what I’m going to use for this demo is the Honeycomb gallery’s sample application. It’s a simple application that comes with the Android SDK the basically just demonstrates some of the features of Honeycomb. And really, all it is is a little app the lets you page through some photos. Pretty simple. Now I’ve done something kind of naughty here. I’ve introduced a memory leak into this application. And I’ll show you how I’ve done that. Sorry, I better switch to the slides again.

So you’ll see here I have the source code, an excerpt of the source code from the activity. And so what I’ve done here is I’ve introduced this inner class called leaky. And this is not a static inner class. So you may know that if you create a non-static inner class, it actually keeps a reference to the enclosing object. And this is because from a non-static inner class, you can actually refer to the instance variables of the enclosing object. So it’s going to retain a reference to the activity here. That’s fine as long as this object doesn’t live longer than the activity. But I’ve got this static field and statics live longer than any particular instance. And in my on create method, what I’ve done is instantiated the leaky object and stored it into the static field. So if you want to be able to visualize this, I basically got my view hierarchy that starts with the main activity. I’ve instantiated this leaky object and he has a reference to the main activity because that was its enclosing class. Finally, I have the main activity class, which is conceptually a different area of memory than any particular instance. And there’s a static variable pointing to the leaky object. So maybe you can see how this is going to cause a memory leak when I rotate the screen. So let’s jump in and take a look at this memory leak. So if you want to figure out whether you have a memory leak, one of the easiest ways is to just kind of look at your log messages. So I’m just going to do that. I’m going to do it at the command line. I can just type logcat. And I want to restrict it to the particular process that I’ve got running here. I don’t want to see all of the log messages on the system. So I’m just going to grab on the process ID. There we see a bunch a log messages, including some garbage collection messages. And the number you want to look at is basically the first number here in the 9805K. The first number in your heap size. This is the amount of live objects in the system. And if you’re looking for a memory leak, that’s what you want to look at. So I’m going to flip through some of the photos here. And you’ll see that that number stays pretty constant. We’re up to 9872. But basically, the heap usage is pretty constant. Now when I rotate this device, we’re going to be a bunch of garbage collections happen. That heap usage goes up and it doesn’t go down again. So we’re now up to 12 megs of heap. So we leaked about two and a half megs. So whenever you see your memory go up in kind of a step function like that, it steps up and just never goes back down, that’s a good sign you have a memory leak.

So once you know that you have a leak, what you’ll want to do is create a heap dump, so you can go about debugging it. So I’m going to do that right now. I’ll open up DDMS. You just need to select the process that you care about and click on this icon up in the toolbar that says dump HPROF file. That’ll create a heap dump. It takes a few seconds because it’s dumping basically a huge binary file out to disk. And then I can just save it in a file called dump.hprof. And then, because I’m using this standalone version of DDMS here, I need to convert this file. As I mentioned, if you’re using the ADT plug-in for Eclipse and using DDMS in there, you don’t need to go through this conversion step. But it’s really simple. Now that I’ve converted it, I can open up the Eclipse Memory Analyzer and take a look at this heap dump. So there’s not much to see in the Memory Analyzer until you’ve opened up a heap dump, which we can do just from the file menu. Open heap dump. And I’ll open up this converted heap dump, which I just created. Doesn’t take very long for it to load up.

And the first thing you'll see is this pie chart. This is showing the biggest objects in the system by retained size. Now this alone doesn't really tell us too much. You can see that down in the bottom left here, when I mouse over the various slices of the pie, it's telling me what kind of object I've got. But that doesn't really tell us too much. If we want to get some more information, you want to look down here. There are two views. The histogram view and the dominator tree. And these are the ones that I find most useful and I'm going to show to you. Let's take a look at the dominator tree. You remember the concept I explained. This is how it can be useful in tracking down a memory leak. So what we've got here is basically a list of instances, or a list of objects in this system, organized by the amount of retained heap. There's a column for that here. So when you've got a memory leak, looking at the amount of retained heap is often a good way to look at things because that's going to have the biggest effect on how much memory you're using. And chances are, if you've noticed that you've got a leak, you're leaking a significant amount. So let me just zoom in here. Hopefully you guys can see this a bit better. So at the very top of the list we have the resources class. That's not too surprising because for our resources we have to load lots of bitmaps. That's going to hold lots of memory alive. That's fine. These two bitmap objects are interesting. I've got these two large bitmaps, more than two and a half megs each. It's funny because that sounds about like the amount of memory that I was leaking. So if I want to investigate a bit further, I can right click on one of these objects and choose path to GC roots. And I'll choose excluding weak references because I want to see what's keeping that object alive. And a weak reference is not going to keep it alive. So this opened up a new tab and what do you know? It actually points right to my leak. So when you're creating leaks in your application, make sure you name it something really helpful like this so you can find it easily.

AUDIENCE: [LAUGHTER]

PATRICK DUBROY: So some of you might have noticed this, that if there's only a single path to this object, because that's all I can see here, why didn't this leak object show up in the dominator tree? I mentioned that the dominator tree should show you the largest objects by their amount of retained heap. And well, this is a single object that's responsible for retaining the bitmap. So the reason for that is that the Eclipse Memory Analyzer, when it calculates the dominator tree, it actually doesn't treat weak references separately. It basically just treats them like a normal reference. So you'll see that if I actually right click on this guy again and say path to GC roots, and say with all references, then there's actually another path to this object. But it's a weak reference. Generally you don't need to be too concerned about weak references because they're not going to prevent your object from being garbage collected. But that's why the leak object didn't show up in the dominator tree. So the dominator tree is one really, really useful way of tracking down a memory leak. Another thing I like to use is the histogram view. So I mentioned that in Android, it's common to leak memory by keeping long-lived references to an activity. So you may want to actually go and look at the number of instances of your main activity class that you have.

And the histogram view lets you do that. So the histogram view just shows a list of all the classes in its system and right now it's sorted based on the amount of shallow heap occupied by classes in the system. So at the very top there, we see we have byte arrays. And the reason for this is that byte arrays are now the backing memory for pixel data. And you know, this is a perfect example of why it's really useful that we now have the pixel data inside the heap. Because if you're using this on Gingerbread or earlier, you're not going to see byte arrays at the top. Because that memory was allocated in native memory. So we could also, if we were concerned about these byte array objects, we might want to right click on it and say list objects with incoming references. And we've got our two large byte array objects here. We can right click on one and say, path to GC roots, excluding weak references. So this guy looks to have several paths, which keep it alive. Nothing looks out of the ordinary to me. And when you're trying to find a memory leak, there's not really a magic answer for how you find a leak. You really have to understand your system and understand what objects are alive, why they're alive, during the various parts of your application. But you'll see if I look at this other byte array object, and again, do path to GC roots excluding weak references, well, I've found my leak again. So this was another way that I might have found this if it weren't so obvious from the dominator tree.

The histogram view can also help us look for our activity instances. So there's a lot of classes obviously in the system. Our activity is not here. There are 2,200 classes. But luckily, Eclipse Memory Analyzer has this handy little filter view at the top. You can just start typing a regular expression. And it'll return you all the classes that match that. So here we've got our main activity. And it tells us that there are actually two instances of this main activity. And that should kind of be a red flag. Normally you should expect to see only a single instance of your main activity alive. Now I mentioned that during the screen rotation, as we build up a new view hierarchy, there's going to be a brief time where there are two instances alive. But for the most part, you should expect to see one here. So I might think, OK, this is a red flag.

Let’s take a look. So I can right click on this object and list objects with incoming references. So I want to look at what instances do I have and what’s pointing to them? And so I’ve got two instances here. If I right click on one of them and choose path to GC roots, excluding weak references, I’ve again, found my memory leak. And in looking at this, I might realize that, oh, I really didn’t intend to do this. I didn’t mean to keep this reference there. So that’s another way that you could have found the leak. So now that we’ve discovered where our memory leak is, why don’t we actually go ahead and fix it. So in this case, the problem was that we had a non-static inner class. So we could fix this by making it a static inner class. And then it wouldn’t actually keep a reference to the enclosing activity. The other thing we could do is actually just not store it in a static variable. So it’s fine if this leaky object has a reference to the activity, as long as it doesn’t live longer than the activity. So let’s do that. Let’s just make this a regular instance variable and not a static. So then I can go in here recompile this and push it to the device. And hopefully, we should see that our memory leak has been eliminated. Sorry, what we actually want to do is look at our log output in order to see how much memory we’re using. So I’m just going to fire up the process here, take a look at the process ID. And again, just do adb logcat just on that process. So as I page through the photos again, we see lots of GC messages. When I rotate, we’re going to see the memory usage goes up for a minute there. But after a few collections, it does go back down to its previous value. So we’ve successfully eliminated the leak there. And this is great. You always want to eliminate memory leaks.

So that’s an example of using the Eclipse Memory Analyzer to debug a memory leak. Eclipse Memory Analyzer is a really powerful tool. It’s a little bit complex. It actually took me quite a while to figure out that these were the two best tools for the job. So you really want to watch out for these memory leaks. So I gave an example here of retaining a long lived reference to an activity. If you’ve got our context, a view, a drawable, all of these things you need to watch out for. Don’t hold long lived references to those. It can also happen with non-static inner classes, which is what I demonstrated there as well. Runnable is actually one that can bite you sometimes. You know, you create a new runnable. You have a deferred event that’s going to run in like five minutes. If user rotates the screen that deferred runnable is going to hold your previous activity instance alive for five minutes. So that’s not good.

You also want to watch out for caches. Sometimes you have a cache and you want to keep memory alive, so that you can load images faster, let's say. But you may inadvertently hold things alive too long. So that covers, basically, the core parts of the Eclipse Memory Analyzer, and gives you a basic understanding of memory leaks. If you'd like to get more information about Memory Analyzer, the download link you can find on the eclipse.org/mat site. Markus Kohler, who's one of the original team members of Eclipse Memory Analyzer, has a blog called the Java Performance Blog. This is really great. He's got tons of great articles on there about MAT and different ways you can use it to understand your application's memory usage.

I’ve also got an article that I wrote on the Android Developer Blog called memory analysis for Android applications. It covers a lot of the same stuff that I did in my demo here. And Romain Guy also has a good article on avoiding memory leaks in Android. So I hope that’s been helpful, I hope you guys have a better understanding now of how you can figure out your apps memory usage.

And I’ve talked about two of the biggest changes that we’ve made in Gingerbread and Honeycomb that affect how your apps use memory. Thanks.

[APPLAUSE]

So I can take questions from the floor if anyone has any. Or you all want to get out and get to a pub and have a beer?

AUDIENCE: Hi, you mentioned that if you use NIO in Honeycomb your objects are not going to be in native memory anymore; now they're going to be managed memory. How does that affect performance if you're doing NIO? Is that going to be any slower, like if it's very intense on the network?

PATRICK DUBROY: No, I mean it shouldn't affect it. So I should say that there is still a way to allocate native memory for your NIO byte buffers. I'm not that familiar with the NIO APIs, but I believe there's a way in JNI you can allocate your own memory. So in that case, you'll still be using native memory. But either way, it's just memory. It's just allocated in a different place. So there's nothing that makes the Dalvik heap memory slower than other memory.
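(For illustration, the two NIO allocation flavors being discussed; the sizes are arbitrary. Whether a direct buffer's storage is counted as external memory or inside the Dalvik heap depends on the platform version, as described earlier in the talk.)

    import java.nio.ByteBuffer;

    public class Buffers {
        public static void main(String[] args) {
            ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024); // direct buffer
            ByteBuffer wrapped = ByteBuffer.allocate(1024 * 1024);      // backed by a byte[]
            System.out.println(direct.isDirect() + " " + wrapped.isDirect()); // prints: true false
        }
    }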

AUDIENCE: So you’re saying how in Honeycomb the bitmaps are stored in the Dalvik heap, but in previous versions to that it was stored on native memory. Does that mean that bitmaps had a different amount of heap size? Or is that still all counted in the 16 or 24 megabytes that previous versions had?

PATRICK DUBROY: Yeah, good question. The accounting limits are still the same. That was accounted for previously. You might have noticed if you ever ran into your heap limit, you would be looking at your heap size and like, I haven't hit the limit yet, why am I getting out of memory? That was actually accounted for, so it was your total heap size plus the amount of externally allocated memory that was your limit. So that hasn't changed.

AUDIENCE: Hello. I have a question on when the garbage collector kicks in. Is it when there's a certain number of objects in memory, or the size of the heap?

PATRICK DUBROY: Well, it depends on what kind of garbage collection you’re talking about. The concurrent garbage collector–

AUDIENCE: Yeah, the concurrent. Yes.

PATRICK DUBROY: Yeah, so that I believe is the amount of basically, how full your heap is getting.

AUDIENCE: Because I noticed that when you do a lot of [INAUDIBLE] provide operations, so you have like [INAUDIBLE] list of operations, the garbage collector kicks in. But it actually doesn't collect any objects, because you're just filling in the array of objects that you want to insert into a database. And that grows quite quickly. And that tends to slow down the application a bit without actually freeing any heap.

PATRICK DUBROY: Yeah, I’m not sure if the GC looks at– so you’re basically saying, I guess, that the collector is kicking in. It’s not actually able to collect anything, so it shouldn’t–

AUDIENCE: But it keeps trying.

PATRICK DUBROY: Yeah, it should be smart enough. Yeah, I don’t believe we actually look at those kind of statistics yet. But I mean it seems reasonable. Yeah.

AUDIENCE: I was wondering if you guys have some plans for making a profiler for applications or more tools for analyzing memory and all that stuff?

PATRICK DUBROY: No plans that I know of. Is there anything in particular that you need? I mean I think the Eclipse Memory Analyzer is a really powerful tool and I use it in my day-to-day work quite a bit. So I've certainly never found that it was missing certain features that I needed.

AUDIENCE: Yeah, probably because there are some old versions from Android that show memory leaks or something. But for example, on Eclair, there were some stuff with the– something there.

PATRICK DUBROY: Yeah, I mean we don't have any immediate plans, I don't think, for any specific tools.

AUDIENCE: OK, thank you.

PATRICK DUBROY: Oh, sorry I’ve been– yeah.

AUDIENCE: To my understanding, the native part of a bitmap memory before was actually an instance of the SKIA library, of one of the SKIA library bitmap classes. So is this still there or is it gone now that there is no more native memory allocated?

PATRICK DUBROY: No, SKIA is still part of this stack there. Basically at the point where SKIA calls out to allocate memory, we actually just call back into the VM and allocate the memory there rather than calling malloc. So it’s still basically the same mechanism, but the memory’s just coming from a different place.

AUDIENCE: OK.

AUDIENCE: I thought that when I was using my application, I checked the heap size. While using the application the heap size was not significantly going up. But the amount of memory used by the application, which is listed in the applications tab under the running applications, is going up significantly. Sometimes even doubling. I know that this is a different heap that is shown there. It's actually the process heap, right? Can you tell me what the background of that is, that this is shown there? Because, like, I might not have a memory leak and users complain about my application leaking memory. Because for the user it looks like it's leaking memory.

PATRICK DUBROY: Right. Because you’re saying there’s stuff that’s attributed to your process that are showing up in the– basically, in system memory?

AUDIENCE: Yeah. So it’s showing the system memory in the applications tab, which is not really linked to my heap memory. So that is going up, but I can only control the heap memory. If I don’t have a native application I cannot control everything else.

PATRICK DUBROY: I mean there are going to be various things in the system that are going to get larger. For example, like your JIT code caches. As the JIT kicks in and is allocating memory, like it needs to store the compiled code somewhere. So there’s definitely other parts of this system that allocate memory that’s going to kind of get charged to your application. But I can’t think of why. I can’t think of anything that would be out of the ordinary really that should cause problems.

AUDIENCE: But do you know if this will be changed maybe in the future? That this number is not shown there because for me, it doesn’t make sense to show this number to the end user because he doesn’t understand what it means.

PATRICK DUBROY: I see. Where is he seeing the number?

AUDIENCE: In the running applications tab. If he goes to settings, running applications, he can see the memory usage per application and that’s actually the system memory.

PATRICK DUBROY: I see. Yeah, I’m not sure what our plans are with that. Sorry. I can take a look and I’m not actually sure where it’s getting that number from.

AUDIENCE: OK, thanks.

AUDIENCE: My question’s about reasonable expectations of out of memory errors. Is it possible to completely eliminate them? We’ve been working for a while in getting rid of all the out of memory errors and down to one in about every 17,000 sessions. Should we keep troubleshooting. I mean, I’d like to get it down to zero, but is that reasonable or?

PATRICK DUBROY: So there are certain scenarios where if you're really close to your memory limit, so if your application's live memory size is really close to that limit, the garbage collector's fundamentally kind of asynchronous. So if you're really close to the limit, there can be times where you're just trying to allocate so fast that the garbage collector can't keep up. So you can actually be sort of outrunning the garbage collector. So certainly it's possible to build applications that never see an out of memory error. But on the other hand, there are certain types of applications that are going to be running really, really close to the limits. One thing you can use, if you have caches or things that you can free up, there are several ways to figure out that you're getting close to the heap memory limit. I believe there's a callback where you can get a notification that we're getting low on memory. Although the name escapes me. But you can also look at that ActivityManager getMemoryClass to get a sense of how much memory you have available on the system. And you know, maybe you can keep smaller caches, or lazily initialize objects rather than initializing them all in the constructor, or something like that. It really depends on the application whether you expect to be running close to that heap limit or not.
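(For illustration, a sketch of reacting to memory pressure; the callback alluded to here is presumably ComponentCallbacks.onLowMemory(), which Activity implements, and the cache field and one-eighth budget are assumptions for the example.)

    import android.app.Activity;
    import android.app.ActivityManager;
    import android.content.Context;
    import java.util.HashMap;
    import java.util.Map;

    public class CachingActivity extends Activity {
        private final Map<String, byte[]> cache = new HashMap<String, byte[]>();

        @Override
        public void onLowMemory() {
            super.onLowMemory();
            cache.clear();   // drop anything we can rebuild rather than risk an OutOfMemoryError
        }

        // A rough per-device budget for the cache, derived from the heap class.
        private int cacheBudgetBytes() {
            ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
            return am.getMemoryClass() * 1024 * 1024 / 8;
        }
    }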

AUDIENCE: You recommended not to call system.gc manually if you can help it. Is there any way to reliably free bitmap memory pre-Honeycomb?

PATRICK DUBROY: Yes. Pre-Honeycomb?

AUDIENCE: Yes.

PATRICK DUBROY: You can call recycle on the bitmap.

AUDIENCE: Yeah, but it can still take several passes apparently.

PATRICK DUBROY: No. If you call recycle that will immediately free the backing memory. The bitmap itself, that’s like 80 bytes or something.

AUDIENCE: There are also bitmaps like drawables that you can’t manually recycle the bitmaps that the drawable object creates.

PATRICK DUBROY: OK.

AUDIENCE: The backing bitmaps for those.

PATRICK DUBROY: I see. No, I mean there are still some cases I guess where system.gc is the right approach.

[UNINTELLIGIBLE PHRASE]

PATRICK DUBROY: OK, which objects are you talking about in–

AUDIENCE: My experience is when I have image drawables that are used somewhere in my layout and I know they're no longer needed. Some of them are fairly large and it seems like–

PATRICK DUBROY: You can call recycle on those I believe.

AUDIENCE: OK. My experience is that it will cause other problems when I do that.

PATRICK DUBROY: If you’re still using them, then you can’t– I mean, you can only recycle that when you’re not using it.

AUDIENCE: Sure. OK.

AUDIENCE: For native code that uses a lot of mallocs, what’s the best way to manage that memory?

PATRICK DUBROY: That’s a very good question. When you’ve got native code, I mean mostly what I was covering here was managing memory from the Dalvik side of things. I don’t know that I have any real pointers. I mean that’s one of the reasons why programming in a managed runtime is very nice. Is that you don’t have to deal with manually managing your memory. I don’t have any great advice for that.

AUDIENCE: Does the app that calls into the native libraries, is it aware of, at least on an aggregate level, how much memory is being used or is it completely a separate–

PATRICK DUBROY: I don’t believe there’s any way to account for if you’re calling into the library and it’s calling malloc. I don’t know that there’s any way to account for that memory from your application side.

AUDIENCE: But that garbage collector will run when you start allocating memory, will it not?

PATRICK DUBROY: It’ll run when you start allocating like objects in Dalvik. It doesn’t have any knowledge of calls to malloc.

AUDIENCE: You’ll just get an out of memory or a failed malloc if you–

PATRICK DUBROY: Yeah. Sure. It’s going to be the same mechanisms as any C or C++ program. Malloc is going to return a null pointer. Yes?

AUDIENCE: [UNINTELLIGIBLE PHRASE]

PATRICK DUBROY: Pardon me?

AUDIENCE: [UNINTELLIGIBLE PHRASE]

PATRICK DUBROY: Oh, OK. That's news to me. Malloc can't fail on Android.

AUDIENCE: [UNINTELLIGIBLE PHRASE]

PATRICK DUBROY: I see. OK.

AUDIENCE: Can you repeat that?

PATRICK DUBROY: Romain tells me that malloc can’t fail on Android.

AUDIENCE: [UNINTELLIGIBLE PHRASE]

PATRICK DUBROY: I see. So I think this is the old Linux lazy– yeah. It’ll successfully allocate the virtual memory, but Linux can actually hand out more virtual memory than it can actually commit. So you can get problems. Like when your system is totally, totally out of native memory, you’re going to see crashes.

AUDIENCE: So native memory is completely separate from anything Dalvik?

PATRICK DUBROY: Yes. Well, I mean, sorry, I should say, like Dalvik is still allocating its own memory like for the heap through the native mechanisms. So it’s reserving the same virtual memory pages that other applications are using.

AUDIENCE: But if your system memory is–

PATRICK DUBROY: Yeah, if your system memory is out, you’re in trouble.

AUDIENCE: But Dalvik won’t get a notice say, hey, better start garbage collecting?

PATRICK DUBROY: Well, no.

AUDIENCE: The flag for using the larger heap, does that require a permission, like a user permission or something like that?

PATRICK DUBROY: I can’t remember whether we added that or not. I don’t think that it does.

AUDIENCE: Like the whole– could it have been like a permission thing? But if it's not, then–

PATRICK DUBROY: Yeah, I mean the idea I think is that– yeah, you’re right. I mean it can affect the system as a whole because you’re going to have apps that are using a lot more memory, which is why I gave that big warning, that this is not something that you should be using unless you know that you really need it.

AUDIENCE: Yeah. But [INAUDIBLE]. OK.

PATRICK DUBROY: I don’t think there’s a permission for it, though.

AUDIENCE: What if the app kind of runs in the background for weeks at a time? So I do everything I can to simulate a leak, click everywhere I can, but I only see the leaks if the app runs for two or three days and then I get [INAUDIBLE].

PATRICK DUBROY: One thing you could try is if you can use the APIs to determine how much free memory you have. I don’t know if there’s any way you can actually kind of notice in your application that it started leaking. But you could write out an HPROF file when you notice that you’ve gotten to a certain point, your heap is getting smaller and smaller. So there is some debug information there that you could use. So if you have like some beta testers, who could actually send you these dumps, then you could do that. So write out the HPROF file to SD card or something.

AUDIENCE: So maybe I can just write an HPROF file every–

PATRICK DUBROY: I wouldn’t do that. I mean they’re quite large. You don’t want to be doing that on a regular basis. But if you detect that things have gone really, really wrong and you’re about to die, in an alpha version or something for testing that’s one way you could do it. But I definitely wouldn’t recommend putting an app in the market that’s dumping like very large files to the SD card for no reason.

AUDIENCE: OK.

PATRICK DUBROY: OK, Thanks a lot.



