Deep Learning Overview for Medical Images | Deep Learning Webinars 2020, Part 6
From the series: Deep Learning Webinars 2020
Deep learning is a principal technology enabling remarkable advancements in artificial intelligence (AI). While you may be aware of mainstream applications of deep learning, like your Xfinity voice remote, Siri, or Alexa, how well acquainted are you with AI applications in medical engineering and science?
bat365 developers have applied deep learning functions in MATLAB® to engineering and science workflows. Get an overview of deep learning using MATLAB through the lens of a cardiac MRI example. You will see:
- Where deep learning is being applied in medical engineering and science and how it is driving the development of MATLAB
- How you can research, develop, and deploy your own deep learning application
- What bat365 engineers can do to help support your success with deep learning
You will also see technology demonstrations including:
- Semi-automating ground truth image labeling
- Training and evaluating a semantic segmentation algorithm on magnetic resonance imaging
- Generating optimized native embedded code
So today's agenda breaks down into, basically, three primary chapters. We'll have a quick overview of deep learning in general for engineering and science and what we have been doing as a company for deep learning on images, and then we'll go into an actual example, following through developing a deep learning solution in MATLAB, before wrapping up with where you can go to find deep learning support at bat365.
So just a really, really high-level overview here-- deep learning is the technology that's been driving the AI megatrend. Since the 1950s, there's been very large interest in artificial intelligence. And just for definition's sake, we're going to go with artificial intelligence being any technique that allows a computer or a machine to mimic human intelligence. That doesn't necessarily mean it needs to be intelligent in and of itself; it just needs to be able to trick a human into believing that the machine they're interacting with behaves with intelligence.
And it really wasn't until the 80s that we started figuring out algorithms and ways for machines to actually learn tasks from data without being explicitly programmed to do those tasks. That's where things start to get really exciting. And then, bringing us to today's topic: deep learning, using neural networks to learn tasks directly from the data-- a higher-level, more sophisticated way of performing AI with machine learning.
And so, deep learning is already part of our everyday lives. We see it when we're talking to our devices-- I know that I have an Apple Watch, and I'm constantly telling it to remind me of this or that, or to set a timer for things that I always forget to do myself. We see it in face detection, and in an especially exciting application that's transforming the commercial space: automated driving.
Very often, we tend to hear about machine learning and deep learning in the context of some kind of fun project that somebody is doing and then publishing online, generating a lot of interest and excitement. Lots of things like detecting objects-- a dog or a cat or a squirrel or whatever it is-- and then, of course, that extends to things like automated driving, identifying bicycles and pedestrians and other cars.
But if you take a look at the image on the right, we can actually see an application of deep learning in engineering and science. This example is taking a look at a production line, I believe at Shell Industries, where they are monitoring machinery and making sure everything looks good.
I've seen-- or spoken to-- a lot of customers who do things like this. Personally, I find one of the more frightening examples is monitoring surgeons during surgery, where they actually pay attention to whether a surgeon takes a scalpel, uses it, and then puts it back on the tray, because apparently, occasionally, a human might make a small error and leave a foreign object inside of a person. That was one of the more terrifying applications I've heard of, but they're using machine learning and AI to hopefully mitigate that issue. So deep learning is really starting to penetrate the world of today.
Here are a few more examples of using MATLAB for deep learning in industry. MATLAB has been applied in a number of ways for automated defect detection-- I see it all over the place, and it's obviously very important for quality control-- as well as for vehicle control and seismic event detection in a few other industries.
We're also very involved in research, so here are a couple of examples, with links if you're interested in pursuing some of this information. We will actually be doing an example similar to the one on the left here: segmenting a region of interest inside of an MRI scan. The other image is a tissue slice, so you can see some of those types of tasks as well.
So that was sort of an overview of deep learning and a little bit of what we've been doing in MATLAB so far. Here's a real quick timeline of how deep learning has evolved with MATLAB, and honestly, this timeline has existed well before 2016. We've had a Neural Network Toolbox inside of our tool offerings for years, until 2017, when we decided to change the name from Neural Networks to Deep Learning in order to get with the times a bit. And you can see that every year we've been adding more and more features in response to the field.
And some of these new features and tools are deep learning tools in and of themselves-- creating neural networks, different types of layers, weight sharing, training loops, and so on-- but there are also supporting features for things like code generation, so that when you eventually need to deploy your deep learning algorithm, that becomes possible in a very easy way. And now, halfway through 2020, we already have a fairly sizable list of new features. Going into the second half of 2020, we're definitely going to be seeing significantly more, so keep your eyes peeled.
So here are just a couple of examples, again, of deep learning for images and video-- automated driving, sign detection, being able to identify objects as you drive through a space, and then a semantic segmentation example, being able to label objects of interest pixel by pixel inside of a video. Here's another couple of examples, this time on signals. I won't sit on this for too long, since our primary interest today is images.
So I'll sit on this very briefly, but we do have a Reinforcement Learning Toolbox. Is anybody here aware of the concept of reinforcement learning? Is that something you guys are familiar with, unfamiliar with, interested in, not interested in? We have one person who's familiar with reinforcement learning-- some interest, and he has worked with it. OK, great.
So for those of you who are not particularly familiar, this is like learning on a whole new level. I like to think of it as taking a baby, plunking it in an environment, and telling it to try to do something and learn as it goes. So it's adaptive learning, if you will-- trying different simulations and eventually learning how to, in the case on the right, walk; in the case on the left, navigate through traffic. So it's sort of like real-time learning.
So I won't sit too much longer on reinforcement learning. That's a whole other topic to get into. Let's take a look at focusing on deep learning for images.
So we've gone through a number of examples of domain-specific deep learning applications for engineering and science that we can support. There are several other reasons why you might want to take a serious look at using MATLAB for deep learning. We have multi-platform deployment solutions-- the goal is to be able to deploy your deep learning algorithm anywhere, onto whatever environment your final deployed system is going to be.
We have all sorts of features and tools for platform productivity-- being able to accelerate using GPUs, working on the cloud, et cetera. We also support a lot of interoperability with TensorFlow and PyTorch. We understand that the world doesn't revolve around us; there are a lot of players in this space, and we do our best to make sure we play friendly with the other kids on the playground.
And then at the center of it all are the people, right? We've got folks that can support you, and we have a really rich customer base that has been providing a lot of community tools as well. So it really all revolves around the folks that are building these really great tools.
So here's more of a preview of what we're going to see today, some information on platform productivity. Here are some tools that you'll get to take a look at for improving your productivity. There's also something we won't be speaking about today, but that I definitely encourage you to explore: how we can connect to additional resources like AWS, Azure, Docker, and NVIDIA containers.
So to sort of tie a nice little bow on the rest of bat365' involvement with deep learning: earlier this year, we were very happy to have been named a Leader in the independent Gartner Magic Quadrant for Data Science and Machine Learning Platforms, trying to create as complete a platform as possible to support all of the needs of a typical deep learning workflow.
All right, now let's actually get into some of the meat of this. The rest of the session is going to follow these four buckets, if you will. We're going to take a look at the four steps of AI-driven system design, starting with data preparation, moving on to actually training an AI model, then a small dash of simulation and test, and finally, a discussion of deployment.
So let's take a look at the problem of the day. For today's example, I promised some MRI and I promised some semantic segmentation.
For those of you who may or may not be familiar with it, the Sunnybrook Cardiac data set is a publicly available data set of, I think, about 45 patients with various heart conditions, along with expert-labeled contours of different parts of the heart. Our goal today will be to correctly segment the left ventricle in these cardiac images. We'll actually be taking advantage of a preexisting neural network that has already been trained, the VGG-16 neural network, and adapting it to fit our use case here.
So, like I promised, we're going to start with data preparation. And some motivation here: this was a slide presented by Andrej Karpathy at the Train AI conference in 2018. He made the comment that when he was working on his PhD, he spent most of his time focusing on developing models and algorithms, really focusing on the science of it all. And that's great, because there are so many publicly available data sets to work off of when you're in research.
All of a sudden, when you move into the commercial space, that data is your treasure-- it's your gold. So you actually spend a lot more time making sure you have correct data for your application and your goal, and a little bit less time focusing on models and algorithms. Data preparation is actually quite a significant task, as I'm sure you've found out in some way, shape, or form if you've worked with deep learning before this.
So we're going to be taking a look at a couple of tools to help process data. And in particular, I'm going to try my best to highlight more of the tools that are there for medical image processing, as well, not just generic image processing. So let's get into today's example.
We'll do a real quick introduction to the MATLAB desktop. So this is the latest release of MATLAB, MATLAB R2020a. And this is our desktop. It's what you're going to see when you first open MATLAB.
For the most part, the primary panels that you interact with are here-- the Command Window, where you can create some variables, perform operations on those variables, create visualizations, et cetera. For the most part, it's fairly straightforward. I consider the Command Window kind of like my Google Chat or my Skype, because it's where you have a conversation with MATLAB and occasionally discover some neat little Easter eggs.
As you work with MATLAB, you keep track of your Workspace there. That's just going to tell you what kind of data is available for you to work with. So now I know I have a few different things open, and I'm able to play around with variables, like b or x or what have you.
And then, finally, you have a command history. This is exactly what it sounds like. It's a history of all the commands that you've used. So if I want to repeat any of my steps, I can go ahead and do that very quickly.
Especially for the newbies, and even for some MATLAB veterans: keep an eye on the toolstrip up here. It's actually a really great place to discover new features in MATLAB for getting things done. Very often, you're going to start off by importing data into MATLAB, right? I'm also extremely fond of the PLOTS tab because I'm very visual, and then we're going to spend a lot of time today living in the APPS tab. OK, so that's the overview of MATLAB, and we'll jump into the presentation.
So like I mentioned, we're going to be taking a look at cardiac images, and we're going to be preparing some of that data. What I've done here-- and this is what you will see when the demo is sent to you-- is a project with a few folders: Part 1, Part 2, Part 3. I think that's fairly self-explanatory. When you open up the project, you actually get shortcuts to open those sections, and it'll load up everything for you.
So here we'll go ahead and get started with the data set. This is an example of a MATLAB live script. By the way, if you're not familiar with it, a really good way to jump through the different sections is to select the section you're interested in and hit Run Section here-- that will allow you to walk through the demo. So let's go ahead and get started.
Like I said, just trying to process the data, or even load it up, can be a bit of a challenge sometimes. So here is my favorite way to go about this: I like to use the apps that are available in MATLAB. Now, I am very lucky-- I happen to work at bat365, so I literally have every single toolbox that we create. Your list of apps might be a little bit smaller than mine.
However, we are working with images today, so I'm really only interested in this section of the Apps tab. More specifically, we're working with DICOM, so I'm going to go ahead and open the DICOM Browser to pull in my cardiac DICOM images, select the folder that my DICOMs live in, and pull in our cardiac slices to view.
Pretty quick and easy. The images are set up in two time points, end-systole and end-diastole. We can do a whole number of things. First of all, we can import that series into the workspace, and then we actually have it available inside of MATLAB to perform operations on.
We can also do more to visualize it-- we can view it in the Volume Viewer. One second, let me-- here we go. So we also have the Volume Viewer. I personally really like this, especially when working with medical images, since so many medical images are volumetric in nature, right? And so I can explore my volume.
If I already have the labels, I can pull those in, and you can actually see the overlay of the labels on my image. I can manipulate the way I view the image however I want, and then also export the rendering-- export the way that I want to visualize my images so that I can recreate this visualization for a paper or a report or a presentation.
That's just a quick introduction to getting started with DICOMs in MATLAB. We actually have a really rich collection of functions for working with DICOMs in our Image Processing Toolbox. The way to find that is to go to the "Read and Write Image Data files" section, where you can find the full list. So it's a really nice resource to take advantage of.
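For reference, the programmatic equivalents of what the apps just did look roughly like this (a minimal sketch; the file and folder names are hypothetical):

```matlab
% Read one slice plus its metadata, or an entire series as a volume.
info = dicominfo('slice001.dcm');                % header metadata (hypothetical file)
I    = dicomread(info);                          % pixel data for that slice
[V, spatial] = dicomreadVolume('SeriesFolder');  % whole DICOM series as a 4-D volume
```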
Now that we've learned how to bring DICOMs into MATLAB, let's talk about how to organize data in preparation for deep learning. The best way by far, I think, is to work with imageDatastores. An imageDatastore is really a container: I point to where my images are, specify the file extension I want, and it creates a container that references those files.
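A minimal sketch of that setup (the folder name is illustrative; a custom read function is swapped in because the default image reader doesn't handle DICOM):

```matlab
imds = imageDatastore('Data', ...
    'FileExtensions', '.dcm', ...   % only pick up DICOM files
    'ReadFcn', @dicomread);         % read each file with dicomread
```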
And now, if I take a look at this image datastore that I've created, I have 105 images that I'm going to be processing, with a bunch of information about where they exist, what folders they live in, et cetera, and how to read them in. Through using a datastore, you unlock a lot more functionality in the Image Processing and Computer Vision Toolboxes. One such example is quickly loading up images to label them in preparation for deep learning. So, to do that real quick-- where are we-- Image Labeler.
And I'll do my best to identify regions of interest. Like I mentioned, we actually already have expert labels, but in the event that you don't have an expert to label your images for you, this is a way to get started doing that. We have a number of options-- you can create a polygon and try to grab your region of interest that way, depending on how much time you have.
But for me, what I really like about the labeler is that I can start off with maybe a smaller labeled data set, and if I have an algorithm or a half-trained neural network that already does a decent job of finding the region of interest, I can actually import that into the Image Labeler and semi-automate the process-- then just use my human time to double-check how good the labeling was before retraining the neural network.
We've got a couple of questions. How would labeling be handled for very large data sets? Sort of in this fashion-- being able to semi-automate would be a really nice thing to do. In some cases it's possible; in some cases it's not.
How many labels can you work with at a time and keep the task practically fast? That's a very good question, and that's one that entirely depends on your application. You might need a lot of labels or maybe you really only care about a few things.
If you're creating a self-driving car, you maybe only need to worry about the things you might reasonably expect to find on a road. You wouldn't necessarily need to identify-- I don't know-- the difference between a house and a skyscraper, right? You just need to know that they're not on the road, and that you need to stay on the road. So it depends on the nature of your problem, for sure. And at the end of it, once you've got enough data that you can at least get started, you export your labels and go ahead and start the actual training.
So this brings us to the end of our data preparation. OK, let's get started with talking about actually training an AI model. What's really nice is that you can start with basically a complete set of both algorithms and pre-built models when you're trying to tackle this problem in MATLAB. Here are just a few examples of the algorithms available for machine learning-- decision trees, Naive Bayes-- and in deep learning, CNNs, which are very, very common for image processing, GANs, LSTMs, et cetera, and the list goes on.
We also support importing pre-built models, which is what this example is going to utilize, as well as many, many reference examples for medical image processing, deep learning, and other industry-specific applications that you can reference and build off of. What I like a lot about these reference examples is that very often you can just swap out the data set and get a lot of mileage from the code without having to change a whole lot, so that's pretty nice.
OK, if you attended some of the prior webinars in this series this week, some of this might look very familiar. We have a number of tools intended to make the process of creating your neural networks easier. For this example, this is the Deep Network Designer app-- a visual environment for creating or importing existing neural networks, modifying them, changing them, doing new things with them, and then exporting them for training. It's really nice to be able to do this in a visual way, as well as having that full list of layers there. Sometimes when you get started with a new toolbox, you don't always know what functions are available, so having that full list is really nice.
New in R2020a is the ability to also import data and perform the training in the same app, so it's now more of a workflow app rather than simply for creating deep networks. So that's available there for you. Another thing to add-- which I don't believe is shown in this video-- oh, it is shown in this video-- is that you can actually export the code for the training. So if you have a local data set that you're working on to figure out how things work, and then you want to scale up the training to a larger data set, you can do that by generating the code and rerunning it on the cloud or wherever your data is.
This is also pretty new, and I'm actually starting to fall in love with this particular app, so I definitely encourage folks to try it out. It's our Experiment Manager. Basically, it allows you to try a bunch of different training setups with different parameters and see what kinds of results you get. What's really nice is that it saves all of the different things that you've tried-- the different experiments, if you will. That way, if you have to write up a report afterwards, you have all of that saved and can reference it as you publish your work. That app is called the Experiment Manager.
All right, let me see-- catch up with some questions. And yes, the slides will be shared with the audience after this presentation. OK, continue on.
We also support hardware acceleration. I mentioned this-- I won't have time to talk about it in too much detail, but basically, wherever you want to perform your training-- scaled up, scaled down, on a GPU, what have you-- that is available for you to play with. And if there are questions on this, I want to divert those more towards the end, but interoperating with other frameworks-- pulling TensorFlow or PyTorch neural networks into MATLAB, or exporting to PyTorch or TensorFlow, et cetera-- is possible through our support of the ONNX format. So that's something we can discuss during the Q&A if there's interest.
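As a taste of that interoperability, importing a network saved in the ONNX format looks roughly like this (a sketch; it assumes the ONNX converter support package is installed, and 'model.onnx' is a hypothetical file):

```matlab
% Bring an ONNX segmentation network into MATLAB as a network object;
% exportONNXNetwork goes the other direction.
net = importONNXNetwork('model.onnx', ...
    'OutputLayerType', 'pixelclassification');
```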
The Experiment Manager does work only with neural networks-- it's part of the Deep Learning Toolbox. But we do have offerings for more traditional machine learning that can do similar things, like Bayesian optimization, so that would be something to explore in a different session, perhaps, if it's of interest.
Melissa's asking a great question: is the Experiment Manager automatically linked to the working project? Yes, it is-- your Experiment Manager project is actually your MATLAB project, so that's pretty convenient, all baked in together.
All right, let's take a look at modeling real quick. This is going to be a little bit lengthy, but we'll do our best to stay on target. OK, so back to left ventricle segmentation. Here is Part 2, the process of modeling. All right, let's get started here.
So at the beginning here, we're doing the exact same thing we did the last time we pulled data into MATLAB: we're using our imageDatastore to point to where that data lives. Let me show the outputs so we can actually see this running in real time. This time we have about 805 images that we're going to be processing, and we've actually specified exactly how we want the DICOMs to be read.
In another folder, we also have ground truth masks. Rather than creating an imageDatastore of the ground truth masks, we're going to tell MATLAB that these are the pixel labels by creating a pixelLabelDatastore. Now it understands that everything contained in these folders is a pixel label: they have two classes-- background and the left ventricle volume-- et cetera, et cetera, and now that is defined.
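A minimal sketch of that definition (the folder name, class names, and the pixel values that encode each class are illustrative and must match your masks):

```matlab
classNames = ["Background", "LeftVentricle"];
pixelIDs   = [0, 1];                              % mask value for each class
pxds = pixelLabelDatastore('Labels', classNames, pixelIDs);
```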
So one of the nice things-- I love datastores so much. One of the reasons I like them is that they give me a nice, high-level overview of what's going on in my giant data set of over 800 images. I can very quickly see that I have an unbalanced data set-- way more background pixels than left ventricle volume pixels-- and that could be something I need to take into account later on as I train or prepare to train my neural network. And yes, the labels are stored separately-- very good question.
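That class tally comes from a one-liner (assuming the pxds sketched above):

```matlab
tbl = countEachLabel(pxds)   % table of per-class pixel counts across the data set
```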
So here, just to orient ourselves with what the images actually look like, I created my own function that randomly selects an image and shows the contour-- the pixel labels-- against the cardiac image. We've got slices from all across the heart. Let's see if I can get a couple of different ones here-- they're all smack dab in the middle right now.
All right, here we go. Here's one more towards the apex of the heart. Some are more-- oh, that one's way down at the apex, and that one's even further down. And then some are in the middle. So we've got some diversity in our images, and that could affect the way our neural network learns.
So very quickly, I know that in the slides I showed a video of how to perform training through an app. This example is going to take us through how to perform the training through programmatic lines of code. The key function I want everybody to pay attention to is the trainNetwork function. It takes three inputs: the data source-- we actually have to combine our two datastores into a single data source to give to trainNetwork; the neural network itself, the layer graph-- that's what lgraph stands for; and the training options, the training hyperparameters. Those are the three things the next section is going to put together for us.
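Once all three pieces exist, the call itself is one line (a sketch with illustrative variable names; pixelLabelImageDatastore is one way to pair the images with their labels):

```matlab
pximds = pixelLabelImageDatastore(imds, pxds);   % combined data source
net = trainNetwork(pximds, lgraph, opts);        % data, network, hyperparameters
```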
This is more of an aside, but you can perform data augmentation-- things like rotating, translating, or scaling images up or down-- as a way to enhance or augment the data you're working with, especially if you're working with a fairly small data set. This could be of interest to use. It's fairly easy to create a data augmenter with all the different types of augmentation you want and then combine everything into your data source with that additional augmentation specified, as sketched below. So that's our data source-- we have that now.
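A sketch of what specifying that augmentation can look like (the ranges here are illustrative, not tuned values):

```matlab
augmenter = imageDataAugmenter( ...
    'RandRotation',     [-10 10], ...   % rotate up to +/- 10 degrees
    'RandXTranslation', [-5 5], ...     % shift horizontally a few pixels
    'RandYTranslation', [-5 5]);        % shift vertically a few pixels
pximds = pixelLabelImageDatastore(imds, pxds, ...
    'DataAugmentation', augmenter);
```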
Now, there's one little step to pay attention to, and this is important, especially when you're trying to prove that you've created a neural network that can do the job even on data it's never seen before: partitioning your data set into a training, a validation, and a testing set. We're doing that very briefly-- you can see the distribution of the images there.
Now we're going to create the neural network. I'm doing this very quickly and easily-- I'm just creating a SegNet layer graph using VGG-16. If you're at all curious, you can open up the documentation on this function to find out a bit more.
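That one call looks roughly like this (the image size and class count are illustrative; the 'vgg16' option needs the pretrained VGG-16 support package):

```matlab
imageSize  = [256 256];
numClasses = 2;                                      % background + left ventricle
lgraph = segnetLayers(imageSize, numClasses, 'vgg16');
```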
We're taking a look at the label imbalance. We talked about how there are way more background pixels than left ventricle volume pixels. One of the things you can do is set the class weights in the pixel classification layer-- that last layer of your neural network-- and use that as a way to penalize overclassification of the background pixels.
Another nice new feature is that you can use the Dice loss, which accommodates possible class imbalances in semantic segmentation problems. So instead of creating weights, I'm just going to use a dicePixelClassificationLayer. Basically, I've removed the last layer and added my new layer to the end, and now I have a neural network that can perform semantic segmentation with a built-in way to deal with a biased data set.
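The swap itself is a couple of lines (a sketch; 'pixelLabels' is the name segnetLayers gives its final layer, but check your own layer graph if it differs):

```matlab
diceLayer = dicePixelClassificationLayer('Name', 'dice');
lgraph = replaceLayer(lgraph, 'pixelLabels', diceLayer);
```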
I'll skip this section-- it just does a visualization and a double-check to make sure the neural network looks good. And now we're on to the very last input to the trainNetwork function, which is setting up the hyperparameters for executing the training. Now, I'm not going to perform the training, because the last time I did that it took about 10 hours, which we definitely don't have time for. Instead, I'm just going to show you what it looks like to perform that training.
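A sketch of what those hyperparameters can look like (every value here is illustrative, not what was used for the 10-hour run):

```matlab
opts = trainingOptions('adam', ...
    'InitialLearnRate',     1e-3, ...
    'MaxEpochs',            30, ...
    'MiniBatchSize',        16, ...
    'Plots',                'training-progress', ... % the live accuracy plot
    'ExecutionEnvironment', 'auto');                 % CPU, GPU, or parallel pool
```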
So when I run the training line, it gives me this nice plot and tells me how my accuracy is evolving over time. So I'll take the last question: am I running on my server or on the cloud? That's an excellent question, and I'll actually use it as an opportunity to highlight something.
In this case, I just set the execution environment to auto, so whether I have my own parallel pool or whatever resources my computer can detect, it will run there. What's nice about our deep learning tools is that we've been really trying to be inclusive about how you get your training done: you can perform training on a single CPU or a GPU connected to your computer, and you can also connect to a parallel pool, which is where you can start accessing the cloud. Normally when I train this, I actually select parallel in order to send the work to the cluster I've created on our Cloud Center.
So I'll end over here. Basically, once the neural network is fully trained, we can pull in our test data set, perform the semantic segmentation on it, process the images, and then utilize this nice built-in function for evaluating the performance of my neural network on the test data set. This is going to take a little bit of time to run, so I'm not going to complete this example, but you are definitely welcome to take a look at the results when I send the final materials to you.
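Those last steps boil down to something like this (a sketch; imdsTest and pxdsTest stand in for the held-out test partitions):

```matlab
pxdsResults = semanticseg(imdsTest, net, 'WriteLocation', tempdir);  % segment test images
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTest);       % compare to truth
metrics.DataSetMetrics   % global accuracy, mean IoU, and so on
```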
OK, moving on-- I do want to take a second to talk about simulation and testing, especially within the context of medical imaging, because I know a lot of folks, especially in the medical device space, have been asking questions like: what does it mean to bring AI into a clinical setting? What does it mean to the FDA to have devices with AI algorithms on them? So it is something to pay attention to and think about-- what kinds of validation and verification you need to perform on your AI.
And the answer is sort of wishy-washy: we don't really know yet. The FDA is still deciding how it wants to approach AI systems on devices. I know they've approved a few, but certainly none of the fully autonomous systems, as far as I'm aware, without any human intervention. So it's an interesting thing to keep an eye on, keep a pulse on. But that's a side note here.
I'll be flying through this a little quickly, but we do have ways to deploy your final neural network. You saw that at the very end of this example I saved a neural network-- the one I decided is my final iteration. How do I take that and put it on my final deployed solution, be it embedded or a server somewhere, wherever that happens to be? We have solutions for that, and I have an example of one such solution right here, which I'll go through very quickly now.
So basically, this example is built on top of this particular doc example, so when you receive the code, you can open that up. The important part here is really just reading the prerequisites, because there is some setup that needs to be done: you want to set up your semantic segmentation network to run natively on a GPU, say, and there are a few additional libraries and things to install.
But at the end of the day, what's important is that you have a way to deploy your final network. So here is an example of an algorithm that I would want to put on my final device: load the network up, predict-- there's my algorithm. And what's lovely is that we can do this automatically. We'll automatically generate the code, and this is an example of what that final code might look like.
So this is all generated automatically by MATLAB-- pretty nice. I can see how it's broken down, and all of my comments from the original code carry over into the final result, so I have a nice overview of my results. That code is ready to deploy-- take it wherever it needs to go.
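For the curious, kicking off that code generation looks roughly like this (a sketch; 'segmentLV' is a hypothetical entry-point function wrapping the trained network, and GPU Coder is required):

```matlab
cfg = coder.gpuConfig('mex');                               % MEX target for local testing
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn'); % generate cuDNN calls
codegen -config cfg segmentLV -args {ones(256,256,'uint8')} -report
```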