
Meta unveils ‘segment anything model’ to identify objects in an image


Several tech companies have been experimenting with generative AI to improve the user experience.


On Wednesday, April 5, Meta introduced the Segment Anything Model (SAM), which can identify and separate specific objects in images and videos.

“Segmentation — identifying which image pixels belong to an object — is a core task in computer vision and is used in a broad array of applications, from analyzing scientific imagery to editing photos,” according to the Meta release. 

Simply put, it can recognize the various objects in a crowded image. Meta's demo shows the AI tool successfully identifying every fruit in a photo of a box of fruit.


Meta describes it as a "promptable system," meaning it can receive user input via text or just a click.
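For readers who want to see what a click-style prompt looks like in practice, the open-source segment-anything Python package that Meta published exposes a predictor that accepts point prompts. The sketch below assumes a locally downloaded SAM checkpoint and image; the file names and click coordinates are illustrative placeholders, not values from Meta's demo.

```python
# Minimal sketch of a "click" prompt with Meta's open-source
# segment-anything package. Checkpoint file, image path, and pixel
# coordinates below are placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (downloaded separately from Meta's repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read the image as an RGB array and compute its embedding once.
image = cv2.cvtColor(cv2.imread("fruit_box.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click at (x=500, y=375); label 1 means
# "this pixel belongs to the object I want segmented".
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks for the click
)
best_mask = masks[np.argmax(scores)]  # boolean HxW mask for the chosen object
```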

The company also released the Segment Anything 1-Billion mask dataset (SA-1B), one of the largest segmentation datasets ever created. Using this dataset, the AI system was trained on 11 million images and more than 1 billion masks.

In the future, this AI software could help with a wide range of applications. Image segmentation technology can be used to edit photos, analyze scientific images, power augmented and virtual reality applications, and even help build larger AI systems.

Meta says, “Reducing the need for task-specific modeling expertise, training compute, and custom data annotation for image segmentation is at the core of the Segment Anything project.”

Furthermore, the tech giant has made this new tool open-source, meaning anyone can use it. Check out the demo to see SAM in action with your images.  
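Because the tool is open source, the "segment everything" behavior shown in the fruit-box demo can also be reproduced programmatically with the package's automatic mask generator. A rough sketch, again with placeholder file names:

```python
# Rough sketch of segmenting every object in an image with the
# open-source package's automatic mask generator.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("fruit_box.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # one dict per detected object

# Each entry carries a binary mask plus metadata such as area and a
# predicted IoU score.
print(len(masks), "objects found")
print(masks[0].keys())
```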

The company has also published a detailed paper, which can be found here.

Study abstract:

We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy-respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive — often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.
