
KHEYDIGIT

16 Job openings at KHEYDIGIT
Sr Machine Learning Associate | Hyderabad, Telangana, India | 5 years | Not disclosed | Remote | Contractual

Job Title: Senior Machine Learning Associate (Computer Vision) – Remote (India)
Location: Remote (Bengaluru, Hyderabad, Chennai)
Engagement Type: Long-Term Contract
Start Date: ASAP
Work Hours: Overlap with EST time zone
Dual Employment: Not permitted

About the Role:
KheyDigit Global Solutions Pvt Ltd is seeking a Senior Machine Learning Associate with deep expertise in Computer Vision to support an international pharmaceutical manufacturing project. This role involves building, optimizing, and deploying AI/ML models that enable automated drug manufacturing line clearance using real-time camera feeds and anomaly detection via AWS infrastructure. You will be part of a global team and collaborate directly with international clients. This is a 100% remote opportunity with long-term growth potential.

Key Responsibilities:
1. Design, develop, and deploy computer vision models using AWS SageMaker.
2. Work on edge computing solutions using AWS Greengrass.
3. Support integration and optimization of models on Nvidia Triton GPU infrastructure.
4. Analyze HD camera feeds to detect anomalies in drug manufacturing line operations.
5. Build and train models to understand “normal” production line behavior and identify deviations.
6. Troubleshoot and enhance real-time AI/ML applications in a live production environment.
7. Collaborate with global technical and product teams; communicate fluently in English.
8. Remain adaptable and solution-oriented in a fast-paced, agile setup.

Required Skills & Experience:
*5+ years of experience in Machine Learning, with a strong focus on Computer Vision.
*Expert-level experience in AWS SageMaker and AWS Greengrass.
*Experience in deploying models on GPU-based infrastructures such as Nvidia Triton.
*Strong problem-solving skills with the ability to debug complex model and deployment issues.
*Excellent communication skills in English; able to work independently with international stakeholders.
*Prior experience working in regulated or manufacturing environments is a plus.
*Exposure to AI/ML use cases in the pharmaceutical or manufacturing sectors.
*Familiarity with model versioning, CI/CD pipelines, and MLOps practices.
*Understanding of camera-based systems and computer vision libraries such as OpenCV, PyTorch, or TensorFlow.

Interview Process:
1. 15-minute HR screening
2. Technical interview(s) with 1–2 client representatives

Senior Machine Learning Associate (Computer Vision) | Greater Hyderabad Area | 5 years | Not disclosed | Remote | Contractual

Job Title: Senior Machine Learning Associate (Computer Vision) – Remote (India)
Location: Remote (India)
Engagement Type: Long-Term Contract
Start Date: ASAP
Work Hours: Overlap with EST time zone
Dual Employment: Not permitted

About the Role:
KheyDigit Global Solutions Pvt Ltd is seeking a Senior Machine Learning Associate with deep expertise in Computer Vision to support an international pharmaceutical manufacturing project. This role involves building, optimizing, and deploying AI/ML models that enable automated drug manufacturing line clearance using real-time camera feeds and anomaly detection via AWS infrastructure. You will be part of a global team and collaborate directly with international clients. This is a 100% remote opportunity with long-term growth potential.

Key Responsibilities:
*Design, develop, and deploy computer vision models using AWS SageMaker.
*Work on edge computing solutions using AWS Greengrass.
*Support integration and optimization of models on Nvidia Triton GPU infrastructure.
*Analyze HD camera feeds to detect anomalies in drug manufacturing line operations.
*Build and train models to understand “normal” production line behavior and identify deviations.
*Troubleshoot and enhance real-time AI/ML applications in a live production environment.
*Collaborate with global technical and product teams; communicate fluently in English.
*Remain adaptable and solution-oriented in a fast-paced, agile setup.

Required Skills & Experience:
*5+ years of experience in Machine Learning, with a strong focus on Computer Vision.
*Expert-level experience in AWS SageMaker and AWS Greengrass.
*Experience in deploying models on GPU-based infrastructures such as Nvidia Triton.
*Strong problem-solving skills with the ability to debug complex model and deployment issues.
*Excellent communication skills in English; able to work independently with international stakeholders.
*Prior experience working in regulated or manufacturing environments is a plus.
*Exposure to AI/ML use cases in the pharmaceutical or manufacturing sectors.
*Familiarity with model versioning, CI/CD pipelines, and MLOps practices.
*Understanding of camera-based systems and computer vision libraries such as OpenCV, PyTorch, or TensorFlow.

Interview Process:
1. 15-minute HR screening
2. Technical interview(s) with 1–2 client representatives

Why Join Us:
*Global exposure and collaboration with international teams
*Long-term remote engagement with flexible working
*Opportunity to contribute to impactful AI-driven solutions in drug manufacturing
*Transparent and professional work culture

Snowflake Matillion Architect | India | 8 years | Not disclosed | Remote | Contractual

Job Title: Snowflake Matillion Architect
Location: Remote (only from India)
Employment Type: Long-Term Contract
Start Date: ASAP
Working Hours: Daily overlap with CET
Dual Employment: Strictly not permitted; any such engagements must be terminated before onboarding.

About the Role
We are looking for a certified Snowflake Architect with 8–10 years of experience to join our international team. As a senior resource, you will play a critical role in designing and implementing robust, scalable, and secure data architecture solutions. This is a hands-on architecture position, ideal for someone with deep technical knowledge of Snowflake and Matillion and a strong foundation in data modeling and analytics. The role requires collaboration with cross-functional teams and global stakeholders to understand business needs and transform them into effective technical solutions.

Key Responsibilities
*Lead the design and development of enterprise-level data architecture using Snowflake.
*Define and implement best practices for data modeling, data integration, and performance tuning.
*Develop and maintain scalable ETL/ELT pipelines using Matillion.
*Collaborate with business and technical stakeholders to understand data needs and deliver robust solutions.
*Ensure data security, governance, and compliance best practices are integrated into the architecture.
*Contribute to the continuous improvement of data architecture frameworks and methodologies.
*Provide expert-level troubleshooting and performance optimization support.
*Mentor junior data engineers or developers as needed.

Minimum Qualifications & Requirements
*5+ years of proven experience as a Data Architect or in a similar technical leadership role.
*Expert-level proficiency in Snowflake, with a Snowflake Advanced Data Architect Certification.
*Strong experience in ETL/ELT development using Matillion, with Matillion certification.
*Solid understanding and hands-on experience in data modeling techniques (star schema, snowflake schema, normalization, etc.).
*Advanced knowledge of data analytics and business intelligence processes.
*Proficiency in working with large-scale datasets and optimizing data pipelines.
*Excellent verbal and written communication skills in English; you will be interacting directly with an international client base.
*Ability to work independently and deliver high-quality results in a fully remote setting.

Technology Stack
Primary Technology: Snowflake (Expert)
Secondary Technology: Matillion
Skill Areas: Data Management & Analytics, Data Modeling, ETL/ELT Design, Performance Tuning and Optimization

Soft Skills Required
*Fluent English communication (both written and verbal)
*Ability to effectively present and explain complex data architectures to non-technical stakeholders
*Strong analytical and problem-solving skills
*Self-motivation and discipline to work remotely with international teams
*Collaborative attitude and a consultative approach

Interview Process
1. Initial Screening: 15-minute HR conversation
2. Technical Assessment: May include a practical test or case study
3. Client Interview: Technical deep-dive with one or two client-side stakeholders

Senior MicroStrategy Developer | Hyderabad, Telangana, India | 5–8 years | Not disclosed | Remote | Contractual

Job Title: MicroStrategy Developer
Location: Remote (Hyderabad, Chennai, Bengaluru, Pune)
Experience: 5–8 years only
Start Date: Immediate to 15 days
Duration: Long-Term Contract
Work Hours Overlap: Required overlap with Canada (EST)
Dual Employment: Not allowed. Any existing dual employment must be terminated immediately upon engagement.

Required Technical Skills (Must-Have):
✅ Expertise in SQL: Ability to write complex queries, perform performance tuning, and work with large datasets.
✅ Hands-on experience with MicroStrategy (MSTR): Especially with newer versions, including the latest enhancements and features.
✅ MicroStrategy schema development: Deep understanding of MSTR architecture, logical and physical schema design, and implementation best practices.
✅ Dashboard and Dossier development: Proven experience in creating high-quality, interactive dashboards, reports, and Dossiers that are business-user friendly and meet data visualization standards.
✅ Report development & data modeling: Ability to build sophisticated reports and to model attributes, facts, metrics, and hierarchies for efficient analytics.
✅ Performance optimization: Skilled in optimizing dashboards and reports for performance, including intelligent use of caching and aggregation.
✅ Business interaction: Comfortable working directly with business stakeholders to gather requirements, iterate on dashboards, and deliver actionable insights.
✅ Understanding of data warehousing concepts: Knowledge of data modeling, ETL, and BI integration concepts.

Soft Skills:
💬 Fluent in English: Excellent verbal and written communication skills to work closely with an international team and business users.

Preferred/Optional Tech Stack (Nice to Have):
*Experience with other BI tools such as Power BI, Tableau, or Qlik.
*Familiarity with ETL tools and data integration platforms.
*Knowledge of cloud platforms (AWS, Azure, GCP).
*Experience in Agile/Scrum development environments.

Please share your profile at hiring@khey-digit.com only if you match the above JD. Resources must be based in India only.

AWS BI Architect | Hyderabad, Telangana, India | 6 years | Not disclosed | On-site | Contractual

Role: AWS BI Architect
Location: Hyderabad/Bengaluru/Chennai
Notice Period: 20 days or less

We are looking for a seasoned AWS BI Architect with 6+ years of experience.

Key Responsibilities:
*Design scalable data platform and analytics architectures.
*Lead technical design discussions and ensure successful implementation.
*Create clear, detailed documentation and collaborate with stakeholders.

What We’re Looking For:
*6+ years of experience in BI and data architecture.
*Strong hands-on expertise in Amazon Redshift (Serverless and/or Provisioned).
*Experience with Redshift performance tuning, RPU sizing, workload isolation, and concurrency scaling.
*Familiarity with Redshift Data Sharing and cross-cluster access.
*Background in BI/reporting, with a strong preference for MicroStrategy.
*Excellent communication and documentation skills.

AWS BI Architect | India | 6 years | Not disclosed | On-site | Contractual

Role: AWS BI Architect
Location: Hyderabad/Bengaluru/Chennai/Pune
Notice Period: Immediate

We are looking for a seasoned AWS BI Architect with 6+ years of experience.

Key Responsibilities:
*Design scalable data platform and analytics architectures.
*Lead technical design discussions and ensure successful implementation.
*Create clear, detailed documentation and collaborate with stakeholders.

What We’re Looking For:
*6+ years of experience in BI and data architecture.
*Strong hands-on expertise in Amazon Redshift (Serverless and/or Provisioned).
*Experience with Redshift performance tuning, RPU sizing, workload isolation, and concurrency scaling.
*Familiarity with Redshift Data Sharing and cross-cluster access.
*Background in BI/reporting, with a strong preference for MicroStrategy.
*Excellent communication and documentation skills.

Please share your profile with us at hiring@khey-digit.com only if you match the JD.

Sr ML Associate (Computer Vision) | Hyderabad, Telangana, India | 5 years | Not disclosed | Remote | Contractual

Job Title: Senior Machine Learning Associate (Computer Vision) – Remote (India)
Location: Remote (Bengaluru, Hyderabad, Chennai)
Engagement Type: Long-Term Contract
Start Date: ASAP
Work Hours: Overlap with EST time zone
Dual Employment: Not permitted

About the Role:
KheyDigit Global Solutions Pvt Ltd is seeking a Senior Machine Learning Associate with deep expertise in Computer Vision to support an international pharmaceutical manufacturing project. This role involves building, optimizing, and deploying AI/ML models that enable automated drug manufacturing line clearance using real-time camera feeds and anomaly detection via AWS infrastructure. You will be part of a global team and collaborate directly with international clients. This is a 100% remote opportunity with long-term growth potential.

Key Responsibilities:
1. Design, develop, and deploy computer vision models using AWS SageMaker.
2. Work on edge computing solutions using AWS Greengrass.
3. Support integration and optimization of models on Nvidia Triton GPU infrastructure.
4. Analyze HD camera feeds to detect anomalies in drug manufacturing line operations.
5. Build and train models to understand “normal” production line behavior and identify deviations.
6. Troubleshoot and enhance real-time AI/ML applications in a live production environment.
7. Collaborate with global technical and product teams; communicate fluently in English.
8. Remain adaptable and solution-oriented in a fast-paced, agile setup.

Required Skills & Experience:
*5+ years of experience in Machine Learning, with a strong focus on Computer Vision.
*Expert-level experience in AWS SageMaker and AWS Greengrass.
*Experience in deploying models on GPU-based infrastructures such as Nvidia Triton.
*Strong problem-solving skills with the ability to debug complex model and deployment issues.
*Excellent communication skills in English; able to work independently with international stakeholders.
*Prior experience working in regulated or manufacturing environments is a plus.
*Exposure to AI/ML use cases in the pharmaceutical or manufacturing sectors.
*Familiarity with model versioning, CI/CD pipelines, and MLOps practices.
*Understanding of camera-based systems and computer vision libraries such as OpenCV, PyTorch, or TensorFlow.

Interview Process:
1. 15-minute HR screening
2. Technical interview(s) with 1–2 client representatives

Delivery & Operations Head – Data Science & AI | Hyderabad, Telangana, India | 10 years | Not disclosed | On-site | Contractual

Job Title: Delivery & Operations Head – Data Science & AI
Experience: 10+ years
Location: Hyderabad (Onsite)
Salary: Competitive – best in industry for the right candidate
Notice Period: Immediate joiners preferred

About the Role:
We are seeking an experienced and dynamic Delivery and Operations Head with a strong background in the Data Science, AI, and Machine Learning domains. The ideal candidate will be responsible for overseeing end-to-end client onboarding, project delivery, team management, and operations. The role demands excellent leadership skills, client engagement expertise, and hands-on experience in managing large-scale data projects.

Key Responsibilities:
*Lead project delivery, operations, and client onboarding across AI, ML, and Data Science projects.
*Manage and scale teams of 100+ technical resources, ensuring performance and delivery excellence.
*Drive client acquisition, relationship management, and account expansion.
*Ensure timely and high-quality delivery of data-centric projects aligned with client expectations.
*Define, track, and meet SLAs, delivery metrics, and operational goals.
*Collaborate with cross-functional teams to ensure resource planning, execution, and project governance.
*Act as the escalation point for client issues and work on proactive resolution strategies.
*Optimize operational processes and implement best practices across delivery functions.

Required Skills & Qualifications:
1. 10+ years of experience in IT delivery and operations, preferably in the AI/ML/Data Science domains.
2. Proven track record in managing data projects end-to-end.
3. Strong experience in client engagement, onboarding, and account management.
4. Demonstrated ability to lead large teams (100+ resources) effectively.
5. Excellent communication, negotiation, and leadership skills.
6. Experience in establishing and maintaining delivery frameworks and operational excellence.

Educational Qualifications:
*Bachelor’s degree from a reputed institution (Engineering/Technology preferred).
*Postgraduate degree in Data Science/Analytics preferred and considered an advantage.

Apply Now: Share your profile at hiring@khey-digit.com only if you match the JD.

GCP Data Engineer | India | 8 years | Not disclosed | Remote | Contractual

Job Title: GCP Data Engineer
Location: Remote (only from India)
Employment Type: Long-Term Contract
Start Date: Immediate
Time Zone Overlap: Must be available to work during EST hours (Canada)
Dual Employment: Not permitted; must be terminated if applicable

About the Role:
We are looking for a highly skilled GCP Data Engineer to join our international team. The ideal candidate will have strong experience with Google Cloud Platform's data tools, particularly DataProc and BigQuery, and will be comfortable working in a remote, collaborative environment. You will play a key role in designing, building, and optimizing data pipelines and infrastructure that drive business insights.

Key Responsibilities:
*Design, develop, and maintain scalable data pipelines and ETL processes on GCP.
*Leverage GCP DataProc and BigQuery to process and analyze large volumes of data.
*Write efficient, maintainable code using Python and SQL.
*Develop Spark-based data workflows using PySpark.
*Collaborate with cross-functional teams in an international environment.
*Ensure data quality, integrity, and security.
*Participate in code reviews and optimize system performance.

Required Qualifications:
*5–8 years of hands-on experience in Data Engineering.
*Proven expertise in GCP DataProc and BigQuery.
*Strong programming skills in Python and SQL.
*Solid experience with PySpark for distributed data processing.
*Fluent English with excellent communication skills.
*Ability to work independently in a remote team environment.
*Comfort working during the Canada EST time zone overlap.

Optional / Nice-to-Have Skills:
*Experience with additional GCP tools and services.
*Familiarity with CI/CD for data engineering workflows.
*Exposure to data governance and data security best practices.

Interview Process:
1. Technical test (online screening)
2. 15-minute HR interview
3. Technical interview (1–2 rounds)

Please share your profile at hiring@khey-digit.com only if you match the above JD.

Associate (ML & Deep Learning) | Karnataka | 3–7 years | INR (not disclosed) | On-site | Full Time

You will be joining a European-based MNC that is transitioning its operations to Bangalore in a hybrid work mode. As part of the team, you will need to demonstrate expertise in core Machine Learning and Deep Learning. This includes a solid grasp of Transformer architectures such as ViT, CLIP, and BERT, as well as experience with contrastive learning techniques like SimCLR, MoCo, and CLIP. Knowledge of temporal embeddings and sequence modeling for video, fine-tuning of large pre-trained models, and a comprehensive understanding of Deep Learning concepts will also be essential for this role.

Your technical and engineering skills should include proficiency in Python and deep learning frameworks like PyTorch (preferred) or TensorFlow. Familiarity with StreetClip or similar multimodal models, experience with video data pipelines, and the ability to optimize model training and inference for performance and scalability will be key responsibilities.

In terms of data and evaluation, you should have experience working with large-scale video datasets such as Kinetics and HowTo100M. Knowledge of evaluation metrics for video understanding and retrieval tasks, along with familiarity with embedding spaces, similarity metrics, and retrieval systems, will also be required.

It would be beneficial to have experience with multimodal learning (vision + text), familiarity with distributed training and model deployment, and contributions to open-source projects or publications in relevant areas.

Ideally, you should hold a degree in Computer Science, Machine Learning, or a related field (Master's or PhD preferred) and have at least 3 years of experience in deep learning or computer vision projects.

If you meet these qualifications and skills, please share your profile at hiring@khey-digit.com.

Sr MongoDB Developer | Hyderabad, Telangana, India | 6 years | Not disclosed | Remote | Contractual

Job Title: Senior MongoDB Developer / Data Modeler
Experience: 6–12 years
Notice Period: Immediate to 15 days
Location: Remote (only from India)
Duration: Long-Term Contract Engagement

About the Role:
We are seeking a Senior MongoDB Developer / Data Modeler to support an international client transitioning their backend infrastructure to MongoDB. This role is advisory in nature and requires a high level of expertise in MongoDB data architecture, hybrid data solutions, and best practices in large-scale modelling.

Key Responsibilities:
• Evaluate whether MongoDB is the appropriate choice for the client’s backend architecture.
• Provide expert-level guidance on data modelling strategies in MongoDB.
• Recommend optimal architecture approaches, including hybrid or alternative data solutions (e.g., MongoDB + Snowflake).
• Challenge and validate existing architectural decisions to ensure scalability and efficiency.
• Act as a strategic consultant, providing clear documentation and recommendations to business and technical stakeholders.

Required Technical Skills:
• Senior-level expertise in MongoDB, with proven experience in complex data modelling and architecture.
• Strong understanding of NoSQL vs RDBMS trade-offs and use-case suitability.
• Familiarity with hybrid data architectures (e.g., integrating MongoDB with Snowflake or traditional RDBMS).
• Experience in large-scale data modelling, preferably in manufacturing or supply chain domains (e.g., BOM data structures).
• Comfortable working in an advisory/consulting capacity without hands-on development, if required.

Optional / Nice-to-Have:
• Experience in hybrid cloud environments and designing scalable data platforms.
• Exposure to performance optimization strategies for large NoSQL datasets.

Soft Skills & Communication:
• Fluent English communication skills are a must, to effectively engage with international teams.
• Ability to communicate clearly and confidently with technical and non-technical stakeholders.
• A proactive and independent mindset with strong problem-solving capabilities.

Interview Process:
• Step 1: 15-minute HR screening.
• Step 2: Technical interview with 1–2 client stakeholders.

Working Hours & Collaboration:
• Requires standard daily overlap with Germany (CET).

Additional Notes:
• Dual Employment: Not permitted. If currently engaged in dual employment, it must be terminated upon selection for this role.

Apply at hiring@khey-digit.com only if you match the JD.

Senior (GCP) Data Engineer | Hyderabad, Telangana | 6–10 years | INR (not disclosed) | On-site | Full Time

As a GCP Data Engineer, you will be an integral part of our international team, utilizing your expertise in Google Cloud Platform's data tools, specifically DataProc and BigQuery. Your primary focus will be on designing, developing, and optimizing data pipelines and infrastructure to enhance business insights. This role requires strong collaboration skills, as you will work remotely with cross-functional teams.

Your responsibilities will include designing and maintaining scalable data pipelines and ETL processes on GCP, utilizing DataProc and BigQuery to process and analyze large data volumes, writing efficient code in Python and SQL, and developing Spark-based data workflows with PySpark. Ensuring data quality, integrity, and security, participating in code reviews, and optimizing system performance will be crucial aspects of your role.

To be successful in this position, you should have a minimum of 5 years of hands-on experience in Data Engineering, proven expertise in GCP DataProc and BigQuery, strong programming skills in Python and SQL, and solid experience with PySpark. Additionally, fluency in English with excellent communication skills, the ability to work independently in a remote team environment, and comfort working during the Canada EST time zone overlap are essential.

Nice-to-have skills include experience with other GCP tools and services, familiarity with CI/CD for data engineering workflows, and exposure to data governance and data security best practices.

The interview process will consist of an online technical test, a 15-minute HR interview, and a technical interview with 1–2 rounds. If you meet the above requirements and are ready to take on this exciting opportunity, please reach out to us at hiring@khey-digit.com.

Azure Data Architect | India | 10 years | Not disclosed | Remote | Contractual

Title: Azure Data Architect
Location: India only (Remote)
Seniority: Senior/Architect
Duration: Long Term
Soft Skills: Strong communication skills; fluent English preferred
Interviews: HR interview and 1–2 technical interviews
Overlap/Timings: 7:00 AM – 4:00 PM IST, depending on project demand
Office: 100% remote work
Dual Employment: Client does not allow dual employment; must be terminated if applicable

Job Description
As an Azure Data Architect, your responsibilities are to:
*Design and implement comprehensive data solutions with Azure Databricks, focusing on scalability, performance, and cost savings.
*Lead the development of data mesh, data lake, and data warehouse architectures customized for the manufacturing sector.
*Work closely with cross-functional teams to capture requirements and convert them into detailed technical designs and practical solutions.
*Offer hands-on support in data ingestion, transformation, and visualization using Azure tools.
*Enhance Databricks clusters and jobs for optimal performance and cost efficiency, following Spark and PySpark best practices.
*Ensure adherence to data governance, security protocols, and disaster recovery standards within Azure.
*Create and update technical design documents that align with business goals.
*Guide development teams on CI/CD processes, data engineering, and analytics best practices.

Profile Requirements
For this position of Azure Data Architect, we are looking for someone with:
*10+ years of experience as a Data Architect, specializing in cloud-based data platforms.
*4+ years of hands-on experience with Azure technologies such as Data Factory, Databricks, and Data Lake Storage.
*Expertise in the manufacturing sector.
*Skill in using Apache Spark, PySpark, and SQL for large-scale data processing tasks.
*A thorough understanding of Azure Databricks architecture, including control plane and compute plane functionalities.
*Experience in designing secure networking configurations and governance strategies for Databricks environments.
*Familiarity with data warehousing approaches like Kimball, Inmon, and Data Vault.
*Practical experience with CI/CD pipelines for data solutions.
*Knowledge of SAP ERP (a significant plus).
*A bachelor’s degree in Computer Science, Information Technology, or a related discipline.
*Strong analytical and problem-solving abilities, complemented by excellent communication skills to connect technical and non-technical stakeholders.
*A collaborative approach focused on ongoing improvement.

Teradata Developer | India | 5 years | Not disclosed | Remote | Contractual

Job Title: Senior Teradata Developer
Location: Remote (only from India)
Engagement: Long-Term
Work Hours: Overlap until 3:00 PM EST preferred
Dual Employment: Not allowed

Role Overview:
We are looking for a Senior Teradata Developer with strong expertise in Teradata, BigQuery, and MicroStrategy to support enterprise-level data management and analytics initiatives.

Requirements:
*5+ years of experience as a Teradata Developer.
*Experience with BigQuery and MicroStrategy.
*Strong communication skills; fluent English preferred.

Additional Info:
Interviews: HR + 1–2 technical rounds.

Interested? Share your profile at hiring@khey-digit.com only if you match the JD.

Azure Data Architect | India | 10 years | Not disclosed | Remote | Contractual

Job Title: Azure Data Architect
Location: India only (Remote)
Seniority: Senior/Architect
Duration: Long Term
Soft Skills: Strong communication skills; fluent English preferred
Interviews: HR interview and 1–2 technical interviews
Overlap/Timings: 7:00 AM – 4:00 PM IST, depending on project demand
Office: 100% remote work
Dual Employment: Client does not allow dual employment; must be terminated if applicable

Job Description:
As an Azure Data Architect, your responsibilities are to:
*Design and implement comprehensive data solutions with Azure Databricks, focusing on scalability, performance, and cost savings.
*Lead the development of data mesh, data lake, and data warehouse architectures customized for the manufacturing sector.
*Work closely with cross-functional teams to capture requirements and convert them into detailed technical designs and practical solutions.
*Offer hands-on support in data ingestion, transformation, and visualization using Azure tools.
*Enhance Databricks clusters and jobs for optimal performance and cost efficiency, following Spark and PySpark best practices.
*Ensure adherence to data governance, security protocols, and disaster recovery standards within Azure.
*Create and update technical design documents that align with business goals.
*Guide development teams on CI/CD processes, data engineering, and analytics best practices.

Profile Requirements:
For this position of Azure Data Architect, we are looking for someone with:
1. 10+ years of experience as a Data Architect, specializing in cloud-based data platforms.
2. 4+ years of hands-on experience with Azure technologies such as Data Factory, Databricks, and Data Lake Storage.
3. Expertise in the manufacturing sector.
4. Skill in using Apache Spark, PySpark, and SQL for large-scale data processing tasks.
5. A thorough understanding of Azure Databricks architecture, including control plane and compute plane functionalities.
6. Experience in designing secure networking configurations and governance strategies for Databricks environments.
7. Familiarity with data warehousing approaches like Kimball, Inmon, and Data Vault.
8. Practical experience with CI/CD pipelines for data solutions.
9. Knowledge of SAP ERP (a significant plus).
10. A bachelor’s degree in Computer Science, Information Technology, or a related discipline.
11. Strong analytical and problem-solving abilities, complemented by excellent communication skills to connect technical and non-technical stakeholders.
12. Experience in maintaining a collaborative approach focused on ongoing improvement.

Apply Now! Share your profile at hiring@khey-digit.com only if you match the JD.

Teradata Developer | Hyderabad, Telangana, India | 5 years | Not disclosed | Remote | Contractual

Job Title: Senior Teradata Developer
Location: Remote (only from India)
Engagement: Long-Term
Work Hours: Overlap until 3:00 PM EST preferred
Dual Employment: Not allowed

Role Overview:
We are looking for a Senior Teradata Developer with strong expertise in Teradata, BigQuery, and MicroStrategy to support enterprise-level data management and analytics initiatives.

Requirements:
*5+ years of experience as a Teradata Developer.
*Experience with BigQuery and MicroStrategy.
*Strong communication skills; fluent English preferred.

Additional Info:
Interviews: HR + 1–2 technical rounds.

Share your profile at hiring@khey-digit.com only if you match the JD.