August 27
• Optimize machine learning models, the ML Graph Conversion Stack, and the ML Inference Stack for deployment on edge devices.
• Develop and maintain the OnDevice ML Inference Framework for specialized and general-purpose processors.
• Collaborate with software engineers, data scientists, and product managers to integrate ML solutions into products.
• Implement techniques to ensure efficient inference and minimal resource consumption on target devices.
• Conduct performance evaluations and continuously improve ML models and the inference pipeline post-deployment.
• Stay updated on the latest advancements in on-device ML technologies and frameworks.
• Troubleshoot and resolve issues related to ML model deployment and execution on devices.
• Strong programming skills in languages such as C++, C, and Python.
• In-depth, hands-on experience with the core implementation of machine learning frameworks such as TensorFlow Lite, PyTorch Mobile, ONNX, or Core ML.
• Strong experience and a proven track record with intrinsics-level (SIMD, e.g. NEON, AVX) implementation for optimizing compute and memory algorithms.
• Proven track record of deploying ML models on edge devices and optimizing them for performance and memory.
• Familiarity with performance profiling tools and techniques for mobile and embedded platforms.
• Solid understanding of computer architecture and hardware acceleration techniques.
• Effective communication skills and the ability to work collaboratively in a team environment.
• Bachelor’s or Master’s degree in Computer Science, Electronics Engineering, or a related field.
August 26
11 - 50
Lead teams in building next-gen cloud applications and driving technical solutions.
August 23
201 - 500
Design and develop high-performance software solutions using Java and Spring Boot.
August 23
501 - 1000
Build and test scalable software solutions using DevOps tools and best practices.
August 23
1001 - 5000
Design, develop, and maintain software solutions for digital banking platform.