vinvino02/glpn-nyu


GLPN fine-tuned on NYUv2

Global-Local Path Networks (GLPN) model trained on NYUv2 for monocular depth estimation. It was introduced in the paper Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Kim et al. and first released in this repository.
Disclaimer: The team releasing GLPN did not write a model card for this model, so this model card has been written by the Hugging Face team.


Model description

GLPN uses SegFormer as its backbone and adds a lightweight head on top for depth estimation.
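As an illustration only (this is not the actual GLPN code, and `ToyDepthHead` is a hypothetical name), a lightweight depth head on top of a SegFormer-style hierarchical feature pyramid might be sketched like this:

```python
import torch
import torch.nn as nn


class ToyDepthHead(nn.Module):
    """Illustrative only: fuse multi-scale encoder features into one depth map."""

    def __init__(self, channels=(64, 128, 320, 512), hidden=64):
        super().__init__()
        # 1x1 convolutions to project every encoder stage to a common width
        self.reduce = nn.ModuleList(nn.Conv2d(c, hidden, 1) for c in channels)
        self.head = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, 1),
        )

    def forward(self, features):
        # upsample every stage to the largest (1/4-resolution) map and sum
        target = features[0].shape[-2:]
        fused = sum(
            nn.functional.interpolate(r(f), size=target, mode="bilinear", align_corners=False)
            for r, f in zip(self.reduce, features)
        )
        return torch.sigmoid(self.head(fused))  # relative depth in [0, 1]


# dummy SegFormer-style feature pyramid for a 480x640 input (strides 4, 8, 16, 32)
feats = [torch.rand(1, c, 480 // s, 640 // s) for c, s in zip((64, 128, 320, 512), (4, 8, 16, 32))]
depth = ToyDepthHead()(feats)
print(depth.shape)  # torch.Size([1, 1, 120, 160])
```

The real decoder (described in the paper) additionally uses selective feature fusion between global and local paths; the sketch only conveys the "hierarchical encoder, lightweight decoder" shape of the design.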


Intended uses & limitations

You can use the raw model for monocular depth estimation. See the model hub to look for
fine-tuned versions on a task that interests you.


How to use

Here is how to use this model:
```python
from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = GLPNFeatureExtractor.from_pretrained("vinvino02/glpn-nyu")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-nyu")

# prepare image for the model
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)

# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```
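The post-processing steps (upsampling and 8-bit normalization) can be exercised in isolation, without downloading the model. This sketch uses a dummy depth prediction with hypothetical shapes (a 120x160 map upsampled to 480x640):

```python
import numpy as np
import torch
from PIL import Image

# dummy predicted depth: batch of 1, low-resolution map (shapes are illustrative)
predicted_depth = torch.rand(1, 120, 160)

# interpolate to a target (H, W), as done with image.size[::-1] in the snippet above
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=(480, 640),
    mode="bicubic",
    align_corners=False,
)

# normalize to uint8 for visualization
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
print(depth.size)  # (640, 480) — PIL reports (width, height)
```

Note that `image.size[::-1]` is needed in the full example because PIL's `size` is `(width, height)` while `torch.nn.functional.interpolate` expects `(height, width)`.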

For more code examples, we refer to the documentation.


BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2201-07436,
  author     = {Doyeon Kim and
                Woonghyun Ga and
                Pyunghwan Ahn and
                Donggyu Joo and
                Sehwan Chun and
                Junmo Kim},
  title      = {Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth},
  journal    = {CoRR},
  volume     = {abs/2201.07436},
  year       = {2022},
  url        = {https://arxiv.org/abs/2201.07436},
  eprinttype = {arXiv},
  eprint     = {2201.07436},
  timestamp  = {Fri, 21 Jan 2022 13:57:15 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2201-07436.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

