Dmitry Ryumin

DmitryRyumin

AI & ML interests

Machine Learning and Applications, Multi-Modal Understanding

DmitryRyumin's activity

posted an update 3 days ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - CVPR 2024 (Avatars Collection)! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: Relightable Gaussian Codec Avatars šŸ”

šŸ“ Description: Relightable Gaussian Codec Avatars is a method for creating highly detailed and relightable 3D head avatars that can animate expressions in real time and support complex features such as hair and skin with efficient rendering suitable for VR.

šŸ‘„ Authors: @psyth , @GBielXONE02 , Tomas Simon, Junxuan Li, and @giljoonam

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ“„ Paper: Relightable Gaussian Codec Avatars (2312.03704)

šŸŒ GitHub Page: https://shunsukesaito.github.io/rgca/

šŸš€ CVPR-2023-24-Papers: https://github.com/DmitryRyumin/CVPR-2023-24-Papers

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #3DAvatars #RealTimeRendering #RelightableAvatars #3DModeling #VirtualReality #CVPR2024 #DeepLearning #ComputerGraphics #ComputerVision #Innovation #VR
posted an update 5 days ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - InstructAvatar (Avatars Collection)! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: InstructAvatar: Text-Guided Emotion and Motion Control for Avatar Generation šŸ”

šŸ“ Description: InstructAvatar is a novel method for generating emotionally expressive 2D avatars using text-guided instructions, offering improved emotion control, lip-sync quality, and naturalness. It uses a two-branch diffusion-based generator to predict avatars based on both audio and text input.

šŸ‘„ Authors: Yuchi Wang et al.

šŸ“„ Paper: InstructAvatar: Text-Guided Emotion and Motion Control for Avatar Generation (2405.15758)

šŸŒ Github Page: https://wangyuchi369.github.io/InstructAvatar/
šŸ“ Repository: https://github.com/wangyuchi369/InstructAvatar

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #InstructAvatar #AvatarGeneration #EmotionControl #FacialMotion #LipSynchronization #NaturalLanguageInterface #DiffusionBasedGenerator #TextGuidedInstructions #2DAvatars #VideoSynthesis #Interactivity #ComputerGraphics #DeepLearning #ComputerVision #Innovation
posted an update 8 days ago
šŸ”„šŸš€šŸŒŸ New Research Alert - YOLOv10! šŸŒŸšŸš€šŸ”„
šŸ“„ Title: YOLOv10: Real-Time End-to-End Object Detection šŸ”

šŸ“ Description: YOLOv10 improves real-time object recognition by eliminating non-maximum suppression and optimizing the model architecture to achieve state-of-the-art performance with lower latency and computational overhead.

šŸ‘„ Authors: Ao Wang et al.

šŸ“„ Paper: YOLOv10: Real-Time End-to-End Object Detection (2405.14458)

šŸ¤— Demo: kadirnar/Yolov10 curated by @kadirnar
šŸ”„ Model šŸ¤–: kadirnar/Yolov10

šŸ“ Repository: https://github.com/THU-MIG/yolov10

šŸ“® Post about YOLOv9 - https://huggingface.co/posts/DmitryRyumin/519784698531054

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #YOLOv10 #ObjectDetection #RealTimeAI #ModelOptimization #MachineLearning #DeepLearning #ComputerVision #Innovation
posted an update 9 days ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - Gaussian Head & Shoulders (Avatars Collection)! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: Gaussian Head & Shoulders: High Fidelity Neural Upper Body Avatars with Anchor Gaussian Guided Texture Warping šŸ”

šŸ“ Description: Gaussian Head & Shoulders is a method for creating high-fidelity upper body avatars by integrating 3D morphable head models with a neural texture warping approach to overcome the limitations of Gaussian splatting.

šŸ‘„ Authors: Tianhao Wu et al.

šŸ“„ Paper: Gaussian Head & Shoulders: High Fidelity Neural Upper Body Avatars with Anchor Gaussian Guided Texture Warping (2405.12069)

šŸŒ Github Page: https://gaussian-head-shoulders.netlify.app

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #3DModeling #NeuralAvatars #GaussianSplatting #HighFidelityAvatars #3DReconstruction #AvatarRendering #TextureWarping #ComputerGraphics #DeepLearning #ComputerVision #Innovation
posted an update 12 days ago
šŸš€šŸ¤–šŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸ¤–šŸš€
šŸ“„ Title: RoHM: Robust Human Motion Reconstruction via Diffusion šŸ”

šŸ“ Description: RoHM is a diffusion-based approach for robust 3D human motion reconstruction from monocular RGB(-D) videos, effectively handling noise and occlusions to produce complete and coherent motions. This method outperforms current techniques in various tasks and is faster at test time.

šŸ‘„ Authors: Siwei Zhang et al.

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ“„ Paper: RoHM: Robust Human Motion Reconstruction via Diffusion (2401.08570)

šŸŒ GitHub Page: https://sanweiliti.github.io/ROHM/ROHM.html
šŸ“ Repository: https://github.com/sanweiliti/RoHM

šŸš€ Added to the CVPR-2023-24-Papers: https://github.com/DmitryRyumin/CVPR-2023-24-Papers

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #RoHM #HumanMotionReconstruction #DiffusionModels #3DAnimation #CVPR2024 #DeepLearning #ComputerVision #Innovation
posted an update 18 days ago
šŸš€šŸ‘•šŸŒŸ New Research Alert - SIGGRAPH 2024 (Avatars Collection)! šŸŒŸšŸ‘ššŸš€
šŸ“„ Title: LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer šŸ”

šŸ“ Description: LayGA is a novel method for animatable clothing transfer that separates the body and clothing into two layers for improved photorealism and accurate clothing tracking, outperforming existing methods.

šŸ‘„ Authors: Siyou Lin, Zhe Li, Zhaoqi Su, Zerong Zheng, Hongwen Zhang, and Yebin Liu

šŸ“… Conference: SIGGRAPH, 28 Jul ā€“ 1 Aug, 2024 | Denver CO, USA šŸ‡ŗšŸ‡ø

šŸ“„ Paper: LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer (2405.07319)

šŸŒ Github Page: https://jsnln.github.io/layga/index.html

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #LayGA #AnimatableClothingTransfer #VirtualTryOn #AvatarTechnology #SIGGRAPH2024 #ComputerGraphics #DeepLearning #ComputerVision #Innovation
posted an update 20 days ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - AniTalker (Avatars Collection)! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding šŸ”

šŸ“ Description: AniTalker is a new framework that transforms a single static portrait and a single input audio file into animated, talking videos with natural, fluid movements.

šŸ‘„ Authors: Tao Liu, Feilong Chen, Shuai Fan, @cpdu , Qi Chen, Xie Chen, and Kai Yu

šŸ“„ Paper: AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding (2405.03121)

šŸŒ Github Page: https://x-lance.github.io/AniTalker
šŸ“ Repository: https://github.com/X-LANCE/AniTalker

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #AniTalker #FacialAnimation #DynamicAvatars #FaceSynthesis #TalkingFaces #DiffusionModel #ComputerGraphics #DeepLearning #ComputerVision #Innovation
posted an update 22 days ago
šŸ˜€šŸ˜²šŸ˜šŸ˜” New Research Alert - FER-YOLO-Mamba (Facial Expressions Recognition Collection)! šŸ˜”šŸ˜„šŸ„“šŸ˜±
šŸ“„ Title: FER-YOLO-Mamba: Facial Expression Detection and Classification Based on Selective State Space šŸ”

šŸ“ Description: FER-YOLO-Mamba is a novel facial expression recognition model that combines the strengths of YOLO and Mamba technologies to efficiently recognize and localize facial expressions.

šŸ‘„ Authors: Hui Ma, Sen Lei, Turgay Celik, and Heng-Chao Li

šŸ”— Paper: FER-YOLO-Mamba: Facial Expression Detection and Classification Based on Selective State Space (2405.01828)

šŸ“ Repository: https://github.com/SwjtuMa/FER-YOLO-Mamba

šŸš€ Added to the Facial Expressions Recognition Collection: DmitryRyumin/facial-expressions-recognition-65f22574e0724601636ddaf7

šŸ”„šŸ” See also Facial_Expression_Recognition - ElenaRyumina/Facial_Expression_Recognition (App, co-authored by @DmitryRyumin ) šŸ˜‰

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #FERYOLOMamba #FER #YOLO #Mamba #FacialExpressionRecognition #EmotionRecognition #ComputerVision #DeepLearning #MachineLearning #Innovation
posted an update 23 days ago
šŸ”„šŸš€šŸŒŸ New Research Alert - YOCO! šŸŒŸšŸš€šŸ”„
šŸ“„ Title: You Only Cache Once: Decoder-Decoder Architectures for Language Models šŸ”

šŸ“ Description: YOCO is a novel decoder-decoder architecture for LLMs that reduces memory requirements, speeds up prefilling, and maintains global attention. It consists of a self-decoder for encoding KV caches and a cross-decoder for reusing these caches via cross-attention.

šŸ‘„ Authors: Yutao Sun et al.

šŸ“„ Paper: You Only Cache Once: Decoder-Decoder Architectures for Language Models (2405.05254)

šŸ“ Repository: https://github.com/microsoft/unilm/tree/master/YOCO

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #YOCO #DecoderDecoder #LargeLanguageModels #EfficientArchitecture #GPUMemoryReduction #PrefillingSpeedup #GlobalAttention #DeepLearning #Innovation #AI
posted an update 24 days ago
šŸ”„šŸš€šŸŒŸ New Research Alert - xLSTM! šŸŒŸšŸš€šŸ”„
šŸ“„ Title: xLSTM: Extended Long Short-Term Memory šŸ”

šŸ“ Description: xLSTM is a scaled-up LSTM architecture with exponential gating and modified memory structures to mitigate known limitations. xLSTM blocks outperform SOTA transformers and state-space models in performance and scaling.

Eagerly awaiting the code release! šŸ•’ļø

šŸ‘„ Authors: Maximilian Beck et al.

šŸ“„ Paper: xLSTM: Extended Long Short-Term Memory (2405.04517)

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #xLSTM #DeepLearning #Innovation #AI
posted an update 27 days ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - SIGGRAPH 2024 (Avatars Collection)! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: 3D Gaussian Blendshapes for Head Avatar Animation šŸ”

šŸ“ Description: 3D Gaussian Blendshapes for Head Avatar Animation is a novel method for modeling and animating photorealistic head avatars from monocular video input.

šŸ‘„ Authors: Shengjie Ma, Yanlin Weng, Tianjia Shao, and Kun Zhou

šŸ“… Conference: SIGGRAPH, 28 Jul ā€“ 1 Aug, 2024 | Denver CO, USA šŸ‡ŗšŸ‡ø

šŸ“„ Paper: 3D Gaussian Blendshapes for Head Avatar Animation (2404.19398)

šŸŒ Github Page: https://gapszju.github.io/GaussianBlendshape/

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #3DAnimation #HeadAvatar #GaussianBlendshapes #FacialAnimation #RealTimeRendering #SIGGRAPH2024 #ComputerGraphics #DeepLearning #ComputerVision #Innovation
replied to their post about 1 month ago

The authors plan to release the dataset in June and the code in July, apparently around CVPR.

posted an update about 1 month ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - CVPR 2024 (Avatars Collection)! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars šŸ”

šŸ“ Description: EMOPortraits is an enhanced multimodal one-shot head avatar model that achieves SOTA performance in emotion transfer and audio-driven facial animation tasks by improving the training pipeline and architecture to better handle intense and asymmetric facial expressions, while also proposing a novel multiview video dataset containing a wide range of such expressions.

šŸ‘„ Authors: Nikita Drobyshev, Antoni Bigata Casademunt, Konstantinos Vougioukas, Zoe Landgraf, Stavros Petridis, and Maja Pantic

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ“„ Paper: EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars (2404.19110)

šŸŒ GitHub Page: https://neeek2303.github.io/EMOPortraits

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #EMOPortraits #EmotionalTransfer #FacialAnimation #HeadAvatar #MultimodalLearning #OneShotLearning #AsymmetricFacialExpressions #IntenseFacialExpressions #NovelDataset #CVPR2024 #DeepLearning #ComputerVision #Innovation
posted an update about 1 month ago
šŸš€šŸŽ­šŸ”„ New Research Alert (Avatars Collection)! šŸ”„šŸŽ­šŸš€
šŸ“„ Title: ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving šŸ”

šŸ“ Description: ConsistentID is a novel portrait generation method that preserves the fine-grained identity of a single reference image.

šŸ‘„ Authors: Jiehui Huang et al.

šŸ”— Paper: ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving (2404.16771)

šŸŒ Github Page: https://ssugarwh.github.io/consistentid.github.io/
šŸ“ Repository: https://github.com/JackAILab/ConsistentID

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #ConsistentID #PortraitGeneration #IdentityPreservation #FineGrainedControl #ImageSynthesis #GenerativeModels #ComputerVision #DeepLearning
posted an update about 1 month ago
šŸš€šŸ•ŗšŸŒŸ New Research Alert - CVPR 2024 (Avatars Collection)! šŸŒŸšŸ’ƒšŸš€
šŸ“„ Title: WANDR: Intention-guided Human Motion Generation šŸ”

šŸ“ Description: WANDR is a conditional Variational AutoEncoder (c-VAE) that generates realistic motion of human avatars that navigate towards an arbitrary goal location and reach for it.

šŸ‘„ Authors: Markos Diomataris, Nikos Athanasiou, Omid Taheri, Xi Wang, Otmar Hilliges, Michael J. Black

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ“„ Paper: WANDR: Intention-guided Human Motion Generation (2404.15383)

šŸŒ Web Page: https://wandr.is.tue.mpg.de
šŸ“ Repository: https://github.com/markos-diomataris/wandr

šŸ“ŗ Video: https://www.youtube.com/watch?v=9szizM-XUCg

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #WANDR #HumanMotionGeneration #MotionSynthesis #3DAvatar #GoalOrientedMovement #IntentionGuided #ConditionalVAE #CVPR2024 #DeepLearning #Innovation
posted an update about 1 month ago
šŸš€šŸŽ­šŸ”„ New Research Alert (Avatars Collection)! šŸ”„šŸ‘„šŸš€
šŸ“„ Title: Learn2Talk: 3D Talking Face Learns from 2D Talking Face

šŸ“ Description: Learn2Talk is a framework that leverages expertise from 2D talking face methods to improve 3D talking face synthesis, focusing on lip synchronization and speech perception.

šŸ‘„ Authors: Yixiang Zhuang et al.

šŸ”— Paper: Learn2Talk: 3D Talking Face Learns from 2D Talking Face (2404.12888)

šŸŒ Github Page: https://lkjkjoiuiu.github.io/Learn2Talk/

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #Learn2Talk #3DTalkingFace #SpeechDrivenFacialAnimation #LipSync #SpeechPerception #ComputerVision #ImageProcessing #DeepLearning
posted an update about 1 month ago
šŸ˜€šŸ¤“šŸ˜Ž New Research Alert - NAACL 2024 (Big Five Personality Traits Collection)! šŸ˜ŽšŸ˜‰šŸ˜¤
šŸ“„ Title: PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits šŸ’¬

šŸ“ Description: This research examines the ability of LLMs to express personality traits and finds that LLMs can generate content consistent with assigned personality profiles and that humans can recognize certain traits with up to 80% accuracy. However, accuracy drops significantly when annotators are aware that the content was generated by an AI.

šŸ‘„ Authors: Hang Jiang et al.

šŸ“… Conference: NAACL, June 16ā€“21, 2024 | Mexico City, Mexico šŸ‡²šŸ‡½

šŸ”— Paper: PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits (2305.02547)

šŸ“ Repository: https://github.com/hjian42/PersonaLLM

šŸš€ Added to the Big Five Personality Traits Collection: DmitryRyumin/big-five-personality-traits-661fb545292ab3d12a5a4890

šŸ”„šŸ” See also OCEAN-AI - ElenaRyumina/OCEANAI (App, co-authored by @DmitryRyumin ) šŸ˜‰

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #PersonaLLM #OCEANAI #BigFive #PersonalityTraits #PersonalityAnalysis #Chatbots #LLMs #NAACL2024 #DeepLearning #Innovation
posted an update about 1 month ago
šŸš€šŸ’‡ā€ā™‚ļøšŸ”„ New Research Alert (Avatars Collection)! šŸ”„šŸ’‡ā€ā™€ļøšŸš€
šŸ“„ Title: HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach

šŸ“ Description: HairFastGAN is a fast, encoder-based approach to realistic and robust hair transfer that operates in the FS latent space of StyleGAN and includes enhanced in-painting and improved encoders for better alignment, color transfer, and post-processing.

šŸ‘„ Authors: Maxim Nikolaev, Mikhail Kuznetsov, Dmitry Vetrov, and Aibek Alanov

šŸ”— Paper: HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach (2404.01094)

šŸ“ Repository: https://github.com/AIRI-Institute/HairFastGAN

šŸ¤— Demo: multimodalart/hairfastgan
šŸ”„ Model šŸ¤–: AIRI-Institute/HairFastGAN

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #HairFastGAN #StyleGAN #VirtualTryOn #HairTransfer #AIHairStyling #GenerativeModels #ComputerVision #ImageProcessing #DeepLearning
posted an update about 1 month ago
šŸš€šŸ‘©ā€šŸŽ¤šŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸ‘©ā€šŸŽ¤šŸš€
šŸ“„ Title: Generalizable Face Landmarking Guided by Conditional Face Warping

šŸ“ Description: A new method is proposed to learn a generalizable face landmark that can handle different facial styles, using labeled real faces and unlabeled stylized faces.

šŸ‘„ Authors: Jiayi Liang, Haotian Liu, Hongteng Xu, Dixin Luo

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: Generalizable Face Landmarking Guided by Conditional Face Warping (2404.12322)

šŸŒ Github Page: https://plustwo0.github.io/project-face-landmarker/
šŸ“ Repository: https://github.com/plustwo0/generalized-face-landmarker

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #FaceLandmarking #DomainAdaptation #FaceWarpping #CVPR2024 #DeepLearning #Innovation
replied to Jaward's post about 1 month ago

Thank you so much! I'm planning a series of posts on Personality Traits and the Big Five. All of the posts will be related to the collection you mentioned. Together with @ElenaRyumina , we are also planning to expand the app (OCEANAI) and are preparing a publication. I hope to share more once the paper is accepted at an A-level conference.

posted an update about 2 months ago
šŸ˜€šŸ¤“šŸ˜Ž New Research Alert - LREC-COLING 2024 (Big Five Personality Traits Collection)! šŸ˜ŽšŸ˜‰šŸ˜¤
šŸ“„ Title: PSYDIAL: Personality-based Synthetic Dialogue Generation using Large Language Models

šŸ“ Description: The PSYDIAL presents a novel pipeline for generating personality-based synthetic dialog data to elicit more human-like responses from language models, and presents a Korean dialog dataset focused on personality-based dialog.

šŸ‘„ Authors: Ji-Eun Han et al.

šŸ“… Conference: LREC-COLING, May 20-25, 2024 | Torino, Italia šŸ‡®šŸ‡¹

šŸ”— Paper: PSYDIAL: Personality-based Synthetic Dialogue Generation using Large Language Models (2404.00930)

šŸš€ Added to the Big Five Personality Traits Collection: DmitryRyumin/big-five-personality-traits-661fb545292ab3d12a5a4890

šŸ”„šŸ” See also OCEAN-AI - ElenaRyumina/OCEANAI (App, co-authored by @DmitryRyumin ) šŸ˜‰

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #PSYDIAL #PersonalityDialogues #SyntheticData #LanguageModels #ConversationalAI #KoreanDialogues #BigFivePersonality #ExtraveresionDialogues #OCEANAI #BigFive #PersonalityTraits #PersonalityAnalysis #LREC-COLING2024 #DeepLearning #Innovation
posted an update about 2 months ago
šŸ˜€šŸ¤“šŸ˜Ž New Space - OCEAN-AI (App, co-authored by @DmitryRyumin ) šŸ˜ŽšŸ˜‰šŸ˜¤
šŸš€ Title: OCEAN-AI is an open-source app for Big Five personality traits assessment and HR-process automation.

šŸ¤— Demo: ElenaRyumina/OCEANAI

šŸ‘„ Authors: @ElenaRyumina , @DmitryRyumin , and Alexey Karpov

šŸ“ Description: OCEAN-AI consists of a set of modules for intellectual analysis of human behavior based on multimodal data for automatic personality traits (PT) assessment. The app evaluates five PT: Openness to experience, Conscientiousness, Extraversion, Agreeableness, Non-Neuroticism.

The App solves practical tasks:
- Ranking of potential candidates by professional responsibilities.
- Forming effective work teams.
- Predicting consumer preferences for industrial goods.

šŸ” Keywords: #OCEANAI #BigFive #PersonalityTraits #PersonalityAnalysis #MultimodalData #Transformers #FirstImpressionsV2 #DeepLearning #Innovation #BehaviorAnalysis #AffectiveRecognition #TeamFormation #ConsumerPreferences #CandidateRanking
posted an update about 2 months ago
šŸ•ŗšŸŽ¬šŸ”„ New Research Alert - CVPR 2024 (Avatars Collection)! šŸ”„šŸ¤–āš”
šŸ“„ Title: GoMAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh

šŸ“ Description: GoMAvatar is an efficient method for real-time, high-quality, animatable human modeling from a single monocular video. It combines the rendering quality of Gaussian splatting with the geometry modeling capabilities of deformable meshes, enabling realistic digital avatars that can be rearticulated in new poses and rendered from novel angles, while seamlessly integrating with graphics pipelines.

šŸ‘„ Authors: Jing Wen, Xiaoming Zhao, Zhongzheng Ren, Alexander G. Schwing, Shenlong Wang

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: GoMAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh (2404.07991)

šŸŒ Github Page: https://wenj.github.io/GoMAvatar/
šŸ“ Repository: https://github.com/wenj/GoMAvatar

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #GoMAvatar #3DAvatar #3DAnimation #AnimatableAvatars #MonocularVideo #RealTimeRendering #HumanModeling #CVPR2024 #DeepLearning #Innovation
posted an update about 2 months ago
šŸš€šŸ•ŗšŸŒŸ New Research Alert (Avatars Collection)! šŸŒŸšŸ’ƒšŸš€
šŸ“„ Title: PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations šŸ”

šŸ“ Description: PhysAvatar is a novel framework that uses inverse rendering and physics to autonomously reconstruct the shape, appearance, and physical properties of clothed human avatars from multi-view video data.

šŸ‘„ Authors: Yang Zheng et al.

šŸ”— Paper: PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations (2404.04421)

šŸŒ GitHub Page: https://qingqing-zhao.github.io/PhysAvatar

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #PhysAvatar #DigitalHumans #InverseRendering #PhysicsSimulation #AvatarModeling #ClothSimulation #PhotorealisticRendering #ComputerVision #DeepLearning #Innovation
posted an update about 2 months ago
šŸš€šŸ’ƒšŸŒŸ New Research Alert (Avatars Collection)! šŸŒŸšŸ•ŗšŸš€
šŸ“„ Title: InstructHumans: Editing Animated 3D Human Textures with Instructions

šŸ“ Description: InstructHumans is a novel framework for text-instructed editing of 3D human textures that employs a modified Score Distillation Sampling (SDS-E) method along with spatial smoothness regularization and gradient-based viewpoint sampling to achieve high-quality, consistent, and instruction-true edits.

šŸ‘„ Authors: Jiayin Zhu, Linlin Yang, Angela Yao

šŸ”— Paper: InstructHumans: Editing Animated 3D Human Textures with Instructions (2404.04037)

šŸŒ Web Page: https://jyzhu.top/instruct-humans
šŸ“ Repository: https://github.com/viridityzhu/InstructHumans

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #InstructHumans #3DTextureEditing #TextInstructions #ScoreDistillationSampling #SDS-E #SpatialSmoothnessRegularization #3DEditing #AvatarEditing #DeepLearning #Innovation
posted an update about 2 months ago
šŸš€šŸ•ŗšŸŒŸ New Research Alert - CVPR 2024 (Avatars Collection)! šŸŒŸšŸ’ƒšŸš€
šŸ“„ Title: 3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting šŸ”

šŸ“ Description: 3DGS-Avatar is a novel method for creating animatable human avatars from monocular videos using 3D Gaussian Splatting (3DGS). By using a non-rigid deformation network and as-isometric-as-possible regularizations, the method achieves comparable or better performance than SOTA methods while being 400x faster in training and 250x faster in inference, allowing real-time rendering at 50+ FPS.

šŸ‘„ Authors: Zhiyin Qian, Shaofei Wang, Marko Mihajlovic, Andreas Geiger, Siyu Tang

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: 3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting (2312.09228)

šŸŒ Github Page: https://neuralbodies.github.io/3DGS-Avatar/
šŸ“ Repository: https://github.com/mikeqzy/3dgs-avatar-release

šŸ“ŗ Video: https://www.youtube.com/watch?v=FJ29U9OkmmU

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #3DGSAvatar #3DAvatar #3DGaussianSplatting #AnimatableAvatars #MonocularVideo #RealTimeRendering #FastTraining #EfficientInference #CVPR2024 #DeepLearning #Innovation
posted an update about 2 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - CVPR 2024 (Avatars Collection)! šŸŒŸ šŸŽ­šŸš€
šŸ“„ Title: GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image šŸ”

šŸ“ Description: GeneAvatar is a generic approach for editing 3D head avatars based on a single 2D image, applicable to different volumetric representations. The novel expression-aware generative modification model delivers high quality and consistent editing results across multiple viewpoints and emotions.

šŸ‘„ Authors: Chong Bao et al.

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image (2404.02152)

šŸŒ Github Page: https://zju3dv.github.io/geneavatar/
šŸ“ Repository: https://github.com/zju3dv/GeneAvatar

šŸ“ŗ Video: https://www.youtube.com/watch?v=4zfbfPivtVU

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #GeneAvatar #HeadAvatar #3DHeadAvatarEditing #VolumetricHeadAvatar #SingleImageEditing #ExpressionAwareModification #CVPR2024 #DeepLearning #Innovation
posted an update about 2 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - CVPR 2024 (Avatars Collection)! šŸŒŸ šŸŽ­šŸš€
šŸ“„ Title: MonoAvatar++: Efficient 3D Implicit Head Avatar with Mesh-anchored Hash Table Blendshapes šŸ”

šŸ“ Description: MonoAvatar++ is a real-time neural implicit 3D head avatar model with high quality and fine-grained control over facial expressions. It uses local hash table blendshapes attached to a parametric facial model for efficient rendering, achieving SOTA results even for challenging expressions.

šŸ‘„ Authors: Ziqian Bai, Feitong Tan, Sean Fanello, Rohit Pandey, Mingsong Dou, Shichen Liu, Ping Tan, Yinda Zhang

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: Efficient 3D Implicit Head Avatar with Mesh-anchored Hash Table Blendshapes (2404.01543)

šŸŒ Github Page: https://augmentedperception.github.io/monoavatar-plus

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #MonoAvatar++ #HeadAvatar #3DModeling #AvatarGeneration #NeuralImplicitAvatar #EfficientRendering #CVPR2024 #DeepLearning #Innovation
replied to their post about 2 months ago

Hi @researcher171473 ,

The idea of using GANs or latent diffusion models to augment visual data instead of image mixing is indeed interesting. However, I have a few considerations:

  1. Training GANs and diffusion models is typically more resource-intensive than simple image mixing.
  2. Ensuring that the generated examples are sufficiently informative and diverse to improve the classifier may require additional mechanisms (diversity regularization, adversarial training, latent space manipulation, domain-specific constraints, etc.).
  3. The generated examples must retain their original semantics and class membership to effectively complement the training data.
  4. The classifier may overfit the generated examples and lose performance on real data.

Despite these challenges, combining image mixing with generative models could yield better results. For example, GANs could be used to generate additional realistic samples that can then be mixed to increase diversity.
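To make the image-mixing baseline discussed above concrete, here is a minimal mixup-style sketch; the toy arrays, labels, and `alpha` value are illustrative, not tied to any particular paper:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup-style image mixing: a convex combination of two samples
    and their (soft) labels, with the ratio drawn from a Beta distribution."""
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))  # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

# Toy usage: mix an all-zeros "image" with an all-ones one.
a, b = np.zeros((4, 4)), np.ones((4, 4))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(a, ya, b, yb)
```

The soft label keeps the mixed sample's class membership consistent with the mixing ratio, which is exactly the semantic-preservation concern raised in point 3.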

posted an update 2 months ago
šŸŽÆšŸ–¼ļøšŸŒŸ New Research Alert - ICLR 2024! šŸŒŸ šŸ–¼ļøšŸŽÆ
šŸ“„ Title: Adversarial AutoMixup šŸ–¼ļø

šŸ“ Description: Adversarial AutoMixup is an approach to image classification augmentation. By alternately optimizing a classifier and a mixed-sample generator, it attempts to generate challenging samples and improve the robustness of the classifier against overfitting.

šŸ‘„ Authors: Huafeng Qin et al.

šŸ“… Conference: ICLR, May 7-11, 2024 | Vienna, Austria šŸ‡¦šŸ‡¹

šŸ”— Paper: Adversarial AutoMixup (2312.11954)

šŸ“ Repository: https://github.com/JinXins/Adversarial-AutoMixup

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸ” Keywords: #AutoMixup #ImageClassification #ImageAugmentation #AdversarialLearning #ICLR2024 #DeepLearning #Innovation
posted an update 2 months ago
ā˜ļøā˜” New Research Alert! ā„ļøšŸŒ™
šŸ“„ Title: CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning

šŸ“ Description: CoDA is a UDA methodology that boosts models to understand all adverse scenes (ā˜ļø,ā˜”,ā„ļø,šŸŒ™) by highlighting the discrepancies within these scenes. CoDA achieves state-of-the-art performances on widely used benchmarks.

šŸ‘„ Authors: Ziyang Gong, Fuhao Li, Yupeng Deng, Deblina Bhattacharjee, Xiangwei Zhu, Zhenming Ji

šŸ”— Paper: CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning (2403.17369)

šŸ“ Repository: https://github.com/Cuzyoung/CoDA

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸ” Keywords: #CoDA #DomainAdaptation #VisualPromptTuning #SAVPT #DeepLearning #Innovation
posted an update 2 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation šŸ”

šŸ“ Description: AniPortrait is a novel framework for generating photorealistic portrait animations driven by audio and a reference image, with superior facial naturalness, pose variety, and visual quality, with potential applications in facial motion editing and facial reenactment.

šŸ‘„ Authors: Huawei Wei, @ZJYang , Zhisheng Wang

šŸ”— Paper: AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation (2403.17694)

šŸ“ Repository: https://github.com/Zejun-Yang/AniPortrait

šŸ¤— Demo: ZJYang/AniPortrait_official
šŸ”„ Model šŸ¤–: ZJYang/AniPortrait

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #AniPortrait #Animation #AudioDriven #Photorealistic #FacialAnimation #DeepLearning #Innovation
posted an update 2 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: FlashFace: Human Image Personalization with High-fidelity Identity Preservation šŸ”

šŸ“ Description: FlashFace is a personalized photo editing tool that focuses on high-fidelity identity preservation and improved compliance through advanced encoding and integration strategies.

šŸ‘„ Authors: Shilong Zhang, Lianghua Huang, @xichenhku et al.

šŸ”— Paper: FlashFace: Human Image Personalization with High-fidelity Identity Preservation (2403.17008)

šŸŒ Github Page: https://jshilong.github.io/flashface-page

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #FlashFace #Personalization #HighFidelityIdentity #DeepLearning #Innovation
posted an update 2 months ago
šŸš€šŸ’ƒšŸŒŸ New Research Alert - ICASSP 2024! šŸŒŸšŸ•ŗšŸš€
šŸ“„ Title: Text2Avatar: Text to 3D Human Avatar Generation with Codebook-Driven Body Controllable Attribute šŸŒŸ

šŸ“ Description: Text2Avatar is a novel approach that can generate realistic 3D human avatars directly from textual descriptions, enabling multi-attribute control and realistic styling, overcoming the challenges of feature coupling and data scarcity in this domain.

šŸ‘„ Authors: Chaoqun Gong et al.

šŸ“… Conference: ICASSP, 14-19 April 2024 | Seoul, Korea šŸ‡°šŸ‡·

šŸ”— Paper: Text2Avatar: Text to 3D Human Avatar Generation with Codebook-Driven Body Controllable Attribute (2401.00711)

šŸŒ Github Page: https://iecqgong.github.io/text2avatar/

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ“ Added to the ICASSP-2023-24-Papers: https://github.com/DmitryRyumin/ICASSP-2023-24-Papers

šŸ” Keywords: #AvatarGeneration #Text2Avatar #ICASSP2024 #DeepLearning #Innovation
posted an update 2 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians šŸ”

šŸ“ Description: GaussianAvatars proposes a novel method for creating photorealistic and fully controllable head avatars by combining a parametric morphable face model with a dynamic 3D representation based on rigged 3D Gaussian splats, enabling high-quality rendering and precise animation control.

šŸ‘„ Authors: Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon Giebenhain, Matthias NieƟner

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians (2312.02069)

šŸŒ Github Page: https://shenhanqian.github.io/gaussian-avatars
šŸ“ Repository: https://github.com/ShenhanQian/GaussianAvatars

šŸ“ŗ Video: https://www.youtube.com/watch?v=lVEY78RwU_I

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #HeadAvatar #GaussianAvatars #DynamicGaussians #3DModeling #AvatarGeneration #CVPR2024 #DeepLearning #Innovation
posted an update 2 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians šŸ”

šŸ“ Description: Gaussian Head Avatar is a method for generating highly detailed 3D head avatars using dynamic Gaussian functions controlled by a neural network, ensuring ultra-high quality visualization even under limited viewpoints.

šŸ‘„ Authors: Yuelang Xu, @ben55 , Zhe Li, @HongwenZhang , @wanglz14 , Zerong Zheng, and @YebinLiu

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians (2312.03029)

šŸŒ Github Page: https://yuelangx.github.io/gaussianheadavatar
šŸ“ Repository: https://github.com/YuelangX/Gaussian-Head-Avatar

šŸ“ŗ Video: https://www.youtube.com/watch?v=kvrrI3EoM5g

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #HeadAvatar #DynamicGaussians #3DModeling #AvatarGeneration #CVPR2024 #DeepLearning #Innovation
posted an update 2 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - ICLR 2024! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: InstructPix2NeRF: Instructed 3D Portrait Editing from a Single Image šŸŒŸšŸš€

šŸ“ Description: InstructPix2NeRF is a novel approach to instructed 3D portrait editing from a single image, using a conditional latent 3D diffusion process and a token position randomization strategy to enable multi-semantic editing while preserving the identity of the portrait.

šŸ‘„ Authors: Jianhui Li et al.

šŸ“… Conference: ICLR, May 7-11, 2024 | Vienna, Austria šŸ‡¦šŸ‡¹

šŸ”— Paper: InstructPix2NeRF: Instructed 3D Portrait Editing from a Single Image (2311.02826)

šŸŒ Github Page: https://mybabyyh.github.io/InstructPix2NeRF
šŸ“ Repository: https://github.com/mybabyyh/InstructPix2NeRF

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #InstructPix2NeRF #AvatarCustomization #3DPortrait #DiffusionProcess #IdentityConsistency #ICLR2024 #DeepLearning #Innovation
posted an update 3 months ago
šŸš€šŸ•ŗšŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸ’ƒšŸ»šŸš€
šŸ“„ Title: NECA: Neural Customizable Human Avatar šŸŒŸšŸš€

šŸ“ Description: The NECA paper presents a novel method for creating customizable human avatars from video, allowing detailed manipulation of pose, shadow, shape, lighting, and texture for realistic rendering and editing.

šŸ‘„ Authors: Junjin Xiao, Qing Zhang, Zhan Xu, and Wei-Shi Zheng

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: NECA: Neural Customizable Human Avatar (2403.10335)

šŸ“ Repository: https://github.com/iSEE-Laboratory/NECA

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #NECA #AvatarCustomization #RealisticRendering #HumanRepresentation #CVPR2024 #DeepLearning #Animation #Innovation
posted an update 3 months ago
šŸš€šŸ’ƒšŸ»šŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸ•ŗ šŸš€
šŸ“„ Title: Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling šŸŒŸšŸš€

šŸ“ Description: Animatable Gaussians - a novel method for creating lifelike human avatars from RGB videos, utilizing 2D CNNs and 3D Gaussian splatting to capture pose-dependent garment details and dynamic appearances with high fidelity.

šŸ‘„ Authors: Zhe Li, Zerong Zheng, Lizhen Wang, and Yebin Liu

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling (2311.16096)

šŸŒ Github Page: https://animatable-gaussians.github.io
šŸ“ Repository: https://github.com/lizhe00/AnimatableGaussians

šŸ“ŗ Video: https://www.youtube.com/watch?v=kOmZxD0HxZI

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #AnimatableGaussians #HumanAvatars #3DGaussianSplatting #CVPR2024 #DeepLearning #Animation #Innovation
posted an update 3 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis šŸŒŸšŸš€

šŸ“ Description: VLOGGER is a method for text- and audio-driven generation of talking human video from a single input image of a person, building on the success of recent generative diffusion models.

šŸ‘„ Authors: @enriccorona , @Andreiz , @kolotouros , @thiemoall , et al.

šŸ”— Paper: VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis (2403.08764)

šŸŒ Github Page: https://enriccorona.github.io/vlogger/

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #VLOGGER #EmbodiedAvatarSynthesis #MultimodalDiffusion #GenerativeDiffusionModels #DeepLearning #Animation #Innovation
posted an update 3 months ago
šŸš€šŸ—£ļøšŸŒŸ New Research Alert - ICASSP 2024! šŸŒŸšŸ—£ļøšŸš€
šŸ“„ Title: AV2Wav: Diffusion-Based Re-synthesis from Continuous Self-supervised Features for Audio-Visual Speech Enhancement šŸŒŸšŸš€

šŸ“ Description: Diffused Resynthesis and HuBERT Speech Quality Enhancement.

šŸ‘„ Authors: Ju-Chieh Chou, Chung-Ming Chien, Karen Livescu

šŸ“… Conference: ICASSP, 14-19 April 2024 | Seoul, Korea šŸ‡°šŸ‡·

šŸ”— Paper: AV2Wav: Diffusion-Based Re-synthesis from Continuous Self-supervised Features for Audio-Visual Speech Enhancement (2309.08030)

šŸŒ Web Page: https://home.ttic.edu/~jcchou/demo/avse/avse_demo.html

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Speech Enhancement Collection: DmitryRyumin/speech-enhancement-65de31e1b6d9a040c151702e

šŸ” Keywords: #AV2Wav #SpeechEnhancement #SpeechProcessing #AudioVisual #Diffusion #ICASSP2024 #Innovation
posted an update 3 months ago
šŸš€šŸ•ŗšŸŒŸ New Research Alert - AAAI 2024! šŸŒŸšŸ’ƒšŸš€
šŸ“„ Title: Relightable and Animatable Neural Avatars from Videos šŸŒŸšŸš€

šŸ“ Description: Relightable & animatable neural avatars from sparse videos.

šŸ‘„ Authors: Wenbin Lin, Chengwei Zheng, Jun-Hai Yong, and Feng Xu

šŸ“… Conference: AAAI, February 20-27, 2024 | Vancouver, Canada šŸ‡ØšŸ‡¦

šŸ”— Paper: Relightable and Animatable Neural Avatars from Videos (2312.12877)

šŸŒ Github Page: https://wenbin-lin.github.io/RelightableAvatar-page
šŸ“ Repository: https://github.com/wenbin-lin/RelightableAvatar

šŸ“ŗ Video: https://www.youtube.com/watch?v=v9rlys0xQGo

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ“š Added to the AAAI 2024 Papers: https://github.com/DmitryRyumin/AAAI-2024-Papers

šŸ” Keywords: #NeuralAvatar #RelightableAvatars #AnimatableAvatars #3DModeling #PhotorealisticRendering #ShadowModeling #DigitalAvatars #GeometryModeling #AAAI2024 #DeepLearning #Animation #Innovation
posted an update 3 months ago
šŸš€šŸ–¼ļøšŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸ–¼ļøšŸš€
šŸ“„ Title: CAMixerSR: Only Details Need More "Attention" šŸŒŸšŸš€

šŸ“ Description: CAMixerSR is a new approach integrating content-aware accelerating framework and token mixer design, to pursue more efficient SR inference via assigning convolution for simple regions but window-attention for complex textures. It exhibits excellent generality and attains competitive results among state-of-the-art models with better complexity-performance trade-offs on large-image SR, lightweight SR, and omnidirectional-image SR.

šŸ‘„ Authors: Yan Wang, Shijie Zhao, Yi Liu, Junlin Li, and Li Zhang

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: CAMixerSR: Only Details Need More "Attention" (2402.19289)

šŸ”— Repository: https://github.com/icandle/CAMixerSR

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Image Enhancement Collection: DmitryRyumin/image-enhancement-65ee1cd2fe1c0c877ae55d28

šŸ” Keywords: #CAMixerSR #SuperResolution #WindowAttention #ImageEnhancement #CVPR2024 #DeepLearning #Innovation
posted an update 3 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - ICLR 2024! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: GPAvatar: Generalizable and Precise Head Avatar from Image(s) šŸŒŸšŸš€

šŸ“ Description: GPAvatar's objective is to faithfully replicate head avatars while providing precise control over expressions and postures.

šŸ‘„ Authors: Xuangeng Chu et al.

šŸ“… Conference: ICLR, May 7-11, 2024 | Vienna, Austria šŸ‡¦šŸ‡¹

šŸ”— Paper: GPAvatar: Generalizable and Precise Head Avatar from Image(s) (2401.10215)

šŸ”— GitHub Page: https://xg-chu.github.io/project_gpavatar
šŸ”— Repository: https://github.com/xg-chu/GPAvatar

šŸ”— Video: https://www.youtube.com/watch?v=7A3DMaB6Zk0

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #GPAvatar #MTA #Synthesis #LipSyncing #Expressions #HighResolutionVideos #ICLR2024 #DeepLearning #Animation #Innovation
posted an update 3 months ago
šŸš€šŸŽ­šŸŒŸ New Research Alert - ICLR 2024! šŸŒŸšŸŽ­šŸš€
šŸ“„ Title: Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis šŸŒŸšŸš€

šŸ‘„ Authors: Zhenhui Ye et al.

šŸ“… Conference: ICLR, May 7-11, 2024 | Vienna, Austria šŸ‡¦šŸ‡¹

šŸ”— Paper: Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis (2401.08503)

šŸ”— GitHub Page: https://real3dportrait.github.io/
šŸ”— Repository: https://github.com/yerfor/Real3DPortrait

šŸ”„ Model šŸ¤–: ameerazam08/Real3DPortrait

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #Real3D-Potrait #I2P #HTB-SR #A2M #Synthesis #LipSyncing #HighResolutionVideos #ICLR2024 #DeepLearning #Animation #Innovation
posted an update 3 months ago
šŸš€šŸ˜ˆšŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸ˜ˆšŸš€
šŸ“„ Title: SyncTalk: The Devil šŸ˜ˆ is in the Synchronization for Talking Head Synthesis šŸŒŸšŸš€

šŸ“ Description: SyncTalk synthesizes synchronized talking head videos, employing tri-plane hash representations to maintain subject identity. It can generate synchronized lip movements, facial expressions, and stable head poses, and restores hair details to create high-resolution videos.

šŸ‘„ Authors: Ziqiao Peng et al.

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis (2311.17590)

šŸ”— GitHub Page: https://ziqiaopeng.github.io/synctalk
šŸ”— Repository: https://github.com/ZiqiaoPeng/SyncTalk

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #TalkingHeads #Synthesis #TriPlaneHash #FacialExpressions #LipSyncing #HighResolutionVideos #CVPR2024 #DeepLearning #Animation #Innovation
posted an update 3 months ago
šŸš€šŸŽ¬šŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸŽ¬šŸš€
šŸ“„ Title: GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians šŸŒŸšŸš€

šŸ‘„ Authors: Liangxiao Hu et al.

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ”— Paper: GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians (2312.02134)

šŸ”— GitHub Page: https://huliangxiao.github.io/GaussianAvatar
šŸ”— Repository: https://github.com/huliangxiao/GaussianAvatar

šŸ”— Video: https://www.youtube.com/watch?v=a4g8Z9nCF-k

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #GaussianAvatar #3DGaussians #HumanAvatarModeling #PoseDependentAppearance #DynamicAppearanceModeling #MotionEstimation #MonocularSettings #AppearanceQuality #RenderingEfficiency #CVPR2024 #DeepLearning #Animation #Innovation
posted an update 3 months ago
šŸš€šŸ’ƒšŸŒŸ New Research Alert - CVPR 2024! šŸŒŸšŸ•ŗšŸš€
šŸ“„ Title: MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model šŸŒŸšŸš€

šŸ‘„ Authors: @junhao910323 , @hansyan et al.

šŸ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA šŸ‡ŗšŸ‡ø

šŸ¤— Demo: zcxu-eric/magicanimate

šŸ”— Paper: MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model (2311.16498)
šŸ”— GitHub Page: https://showlab.github.io/magicanimate/
šŸ”— Repository: https://github.com/magic-research/magic-animate

šŸ”„ Model šŸ¤–: zcxu-eric/MagicAnimate

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36

šŸ” Keywords: #MagicAnimate #DiffusionModel #HumanImageAnimation #CVPR2024 #Diffusion #DeepLearning #Innovation
posted an update 3 months ago
šŸŒŸšŸŽ­āœØ Exciting News! The Latest in Expressive Video Portrait Generation! šŸŒŸšŸŽ­āœØ

šŸ“„ Title: EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions

šŸ‘„ Authors: Linrui Tian, @lucaskingjade , Bang Zhang, and @Liefeng

šŸ”— Paper: EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions (2402.17485)
šŸ”— GitHub Page: https://humanaigc.github.io/emote-portrait-alive
šŸ”— Repository: https://github.com/HumanAIGC/EMO

šŸ” Keywords: #EMO #EmotePortrait #Audio2VideoDiffusion #ExpressiveAnimations #VideoGeneration #DigitalArt #HumanExpression #ComputerVision #DeepLearning #AI

šŸš€ Added to the Avatars Collection: DmitryRyumin/avatars-65df37cdf81fec13d4dbac36
posted an update 3 months ago
šŸš€šŸ”„šŸŒŸ New Research Alert - ICLR 2024! šŸŒŸšŸ”„šŸš€
šŸ“„ Title: FuseChat: Revolutionizing Chat Models Fusion šŸŒŸšŸš€

šŸ‘„ Authors: @Wanfq , @passerqxj et al.

šŸ“… Conference: ICLR, May 7-11, 2024 | Vienna, Austria šŸ‡¦šŸ‡¹

šŸ”— Paper: FuseChat: Knowledge Fusion of Chat Models (2402.16107)
šŸ”— Repository: https://github.com/fanqiwan/FuseLLM

šŸ”„ Models šŸ¤–:
1ļøāƒ£ FuseChat-7B-VaRM: FuseAI/FuseChat-7B-VaRM
2ļøāƒ£ FuseChat-7B-Slerp: FuseAI/FuseChat-7B-Slerp
3ļøāƒ£ OpenChat-3.5-7B-Solar: FuseAI/OpenChat-3.5-7B-Solar
4ļøāƒ£ FuseChat-7B-TA: FuseAI/FuseChat-7B-TA
5ļøāƒ£ OpenChat-3.5-7B-Mixtral: FuseAI/OpenChat-3.5-7B-Mixtral

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸ” Keywords: #FuseChat #ChatModels #KnowledgeFusion #ICLR2024 #AI #Innovation #FuseLLM
replied to their post 3 months ago

Got it, added links to models on the HF hub. Will keep that in mind for the future. šŸ˜Š

posted an update 3 months ago
šŸŒŸāœØ Exciting Announcement: NVIDIA AI Foundation Models āœØšŸŒŸ

šŸš€ Interact effortlessly with the latest SOTA AI model APIs, all optimized on the powerful NVIDIA accelerated computing stack, right from your browser! šŸ’»āš”

šŸ”— Web Page: https://catalog.ngc.nvidia.com/ai-foundation-models

šŸŒŸšŸŽÆ Favorites:

šŸ”¹ Code Generation:
1ļøāƒ£ Code Llama 70B šŸ“šŸ”„: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/codellama-70b
Model šŸ¤–: codellama/CodeLlama-70b-hf

šŸ”¹ Text and Code Generation:
1ļøāƒ£ Gemma 7B šŸ’¬šŸ’»: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/gemma-7b
Model šŸ¤–: google/gemma-7b
2ļøāƒ£ Yi-34B šŸ“ššŸ’”: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/yi-34b
Model šŸ¤–: 01-ai/Yi-34B

šŸ”¹ Text Generation:
1ļøāƒ£ Mamba-Chat šŸ’¬šŸ: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/mamba-chat
Model šŸ¤–: havenhq/mamba-chat
2ļøāƒ£ Llama 2 70B šŸ“šŸ¦™: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/llama2-70b
Model šŸ¤–: meta-llama/Llama-2-70b

šŸ”¹ Text-To-Text Translation:
1ļøāƒ£ SeamlessM4T V2 šŸŒšŸ”„: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/seamless-m4t2-t2tt
Model šŸ¤–: facebook/seamless-m4t-v2-large

šŸ”¹ Image Generation:
1ļøāƒ£ Stable Diffusion XL šŸŽØšŸ”: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/sdxl

šŸ”¹ Image Conversation:
1ļøāƒ£ NeVA-22B šŸ—ØļøšŸ“ø: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/neva-22b

šŸ”¹ Image Classification and Object Detection:
1ļøāƒ£ CLIP šŸ–¼ļøšŸ”: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/clip

šŸ”¹ Voice Conversion:
1ļøāƒ£ Maxine Voice Font šŸ—£ļøšŸŽ¶: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/voice-font

šŸ”¹ Multimodal LLM (MLLM):
1ļøāƒ£ Kosmos-2 šŸŒšŸ‘ļø: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/kosmos-2
posted an update 3 months ago
šŸŽ‰āœØ Exciting Research Alert! YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information šŸš€

YOLOv9 is the latest breakthrough in object detection!

šŸ“„ Title: YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information

šŸ‘„ Authors: Chien-Yao Wang et al.
šŸ“… Published: arXiv, February 2024

šŸ”— Paper: YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information (2402.13616)
šŸ”— Model šŸ¤–: adonaivera/yolov9
šŸ”— Repo: https://github.com/WongKinYiu/yolov9

šŸš€ Don't miss out on this cutting-edge research! Explore YOLOv9 today and stay ahead of the curve in the dynamic world of computer vision. šŸŒŸ

šŸ” Keywords: #YOLOv9 #ObjectDetection #DeepLearning #ComputerVision #Innovation #Research #ArtificialIntelligence
posted an update 3 months ago
šŸš€šŸ”„šŸŒŸ New Research Alert - ICLR 2024! šŸŒŸšŸ”„šŸš€
šŸ“„ Title: FasterViT: Fast Vision Transformers with Hierarchical Attention

šŸ‘„ Authors: @ahatamiz , @slivorezzz et al.

šŸ“… Conference: ICLR, May 7-11, 2024 | Vienna, Austria šŸ‡¦šŸ‡¹

šŸ”— Paper: FasterViT: Fast Vision Transformers with Hierarchical Attention (2306.06189)

šŸ”— Model šŸ¤– : nvidia/FasterViT
šŸ”— Repo: https://github.com/NVlabs/FasterViT

šŸ“š More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin

šŸ” Keywords: #VisionTransformers #DeepLearning #ComputerVision #ICLR2024 #MachineLearning #HierarchicalAttention #NeuralNetworks #Research #ArtificialIntelligence #Innovation
posted an update 3 months ago
šŸ“¢ New Research Alert - AAAI 2024!
šŸ“„ Title: CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition

šŸ‘„ Authors: Cheng Peng et al.

šŸ“… Conference: AAAI, February 20-27, 2024 | Vancouver, Canada šŸ‡ØšŸ‡¦

šŸ”— Paper: https://arxiv.org/abs/2312.10201

šŸ”— Repository: https://github.com/chengzju/CARAT

šŸ“š More Papers: Explore a collection of exciting papers presented at AAAI 2024 and other conferences in the repositories:
- AAAI 2024 Papers: https://github.com/DmitryRyumin/AAAI-2024-Papers, curated by @DmitryRyumin
- Other Conferences: DmitryRyumin/NewEraAI-Papers

šŸ” Keywords: #EmotionRecognition #MultiModal #AI #Research #AAAI2024 #MachineLearning
posted an update 3 months ago
posted an update 4 months ago