Sheng Ye | Computer Science | Best Researcher Award

Mr. Sheng Ye, Tsinghua University, China

Mr. Sheng Ye 🎓 is a talented researcher in advanced computer science, specializing in deep learning and computer vision. He graduated in the top 15% of his class at Tsinghua University with a GPA of 3.89/4.0 under the guidance of Prof. Liu Yongjin, quickly establishing himself as a promising researcher. His award-winning project on real-time video stylization 🏅 received the “Best Practice Award” from Kuaishou and Tsinghua University, and he has been honored with multiple scholarships, including the prestigious “Jiukun Scholarship.” Known for his impactful publications 📑 and contributions to academic conferences, Mr. Sheng Ye is well-positioned to excel in research.

Publication Profile

Scopus

Education Background 🎓

The candidate holds a strong academic record in advanced computer science, focusing on deep learning and computer vision. They graduated among the top 15% of their class at Tsinghua University with a GPA of 3.89/4.0 under the supervision of Prof. Liu Yongjin. Recognized as an exemplary graduate, their academic achievements reflect a dedication to excellence. Early accolades include ranking within the top 10 of their grade and excelling in the national entrance exam with a score of 703. This foundation underlines their exceptional knowledge base and capability in scientific research.

Research Focus and Achievements 🔬

The candidate’s research spans innovative deep learning techniques and computer vision applications. A notable project on real-time video stylization was awarded the “Best Practice Award” by Kuaishou and Tsinghua University. Additional distinctions include winning first prize at the 16th Image and Graphics Technology and Applications Conference (IGTA). Their record is further strengthened by multiple scholarships and recognitions, including the prestigious “Tsinghua Friends – Jiukun Scholarship” in 2022–2023. This research-oriented focus positions the candidate as a strong contender for the Best Researcher Award.

Professional Experience and Contributions 💼

Through internships and student roles, the candidate has significantly impacted Tsinghua’s computing community. Leading publicity efforts in the computer science department, they manage the “JiXiaoYan” public account, curating content across various academic themes. Their professional involvement also extends to reviewing for leading conferences such as CVPR, AAAI, NeurIPS, and ECCV. This experience illustrates their commitment to academic development and a thriving research community.

Key Publications 📑

  • 2024: DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation – ACM Transactions on Graphics, 43(4) 📊
  • 2024: O2-Recon: 3D Reconstruction of Occluded Objects – AAAI Conference on Artificial Intelligence, 38(3) 🖼️
  • 2024: Online Exhibition Halls with Virtual Agents – Journal of Software, 35(3) 🌐
  • 2024: Fine-Grained Indoor Scene Reconstruction – IEEE Transactions on Visualization and Computer Graphics 📐
  • 2023: Virtual Digital Human for Customer Service – Computers & Graphics, 115 🎭
  • 2022: Audio-Driven Gesture Generation – Lecture Notes in Computer Science, 13665 🎶

Publication Top Notes

DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models

O2-Recon: Completing 3D Reconstruction of Occluded Objects in the Scene with a Pre-trained 2D Diffusion Model

Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement

Generation of Virtual Digital Human for Customer Service Industry

Audio-Driven Stylized Gesture Generation with Flow-Based Model

Conclusion 🏆

The candidate’s robust educational background, innovative research, and active participation in academic communities distinguish them as a prime candidate for the Best Researcher Award. With numerous accolades, impactful publications, and a track record of community engagement, they are set to make meaningful contributions to the fields of deep learning and computer vision.