Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of ways to get work done. There’s another option. Freshworks. With a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And, over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us. Job Description We’re looking for a Jr AI Security Architect to join our growing Security Architecture team. This role will support the design, implementation, and protection of AI/ML systems, models, and datasets. The ideal candidate is passionate about the intersection of artificial intelligence and cybersecurity, and eager to contribute to building secure-by-design AI systems that protect users, data, and business integrity. Key Responsibilities Secure AI Model Development - Partner with AI/ML teams to embed security into the model development lifecycle, including during data collection, model training, evaluation, and deployment. - Contribute to threat modeling exercises for AI/ML pipelines to identify risks such as model poisoning, data leakage, or adversarial input attacks. - Support the evaluation and implementation of model explainability, fairness, and accountability techniques to address security and compliance concerns. - Develop and train internal models for security purposes Model Training & Dataset Security - Help design controls to ensure the integrity and confidentiality of training datasets, including the use of differential privacy, data validation pipelines, and access controls. - Assist in implementing secure storage and version control practices for datasets and model artifacts. - Evaluate training environments for exposure to risks such as unauthorized data access, insecure third-party libraries, or compromised containers. AI Infrastructure Hardening - Work with infrastructure and MLOps teams to secure AI platforms (e.g., MLFlow, Kubeflow, SageMaker, Vertex AI) including compute resources, APIs, CI/CD pipelines, and model registries. - Contribute to security reviews of AI-related deployments in cloud and on-prem environments. - Assist in automating security checks in AI pipelines, such as scanning for secrets, validating container images, and enforcing secure permissions. Secure AI Integration in Products - Participate in the review and assessment of AI/ML models embedded into customer-facing products to ensure they comply with internal security and responsible AI guidelines. - Help develop misuse detection and monitoring strategies to identify model abuse (e.g., prompt injection, data extraction, hallucination exploitation). - Support product security teams in designing guardrails and sandboxing techniques for generative AI features (e.g., chatbots, image generators, copilots). Knowledge Sharing & Enablement - Assist in creating internal training and security guidance for data scientists, engineers, and developers on secure AI practices. - Help maintain documentation, runbooks, and security checklists specific to AI/ML workloads. - Stay current on emerging AI security threats, industry trends, and tools; contribute to internal knowledge sharing. Qualifications - 3-4 years of experience in LLM and 7-10 years of experience in cybersecurity, machine learning, or related fields. - Familiarity with ML frameworks (e.g., PyTorch, TensorFlow) and MLOps tools (e.g., MLFlow, Airflow, Kubernetes). - Familiarity with AI models and Supplychain risks - Understanding of common AI/ML security threats and mitigations (e.g., model inversion, adversarial examples, data poisoning). - Experience working with cloud environments (AWS, GCP, Azure) and securing workloads. - Some knowledge of responsible AI principles, privacy-preserving ML, or AI compliance frameworks is a plus. Soft Skills - Strong communication skills to collaborate across engineering, data science, and product teams. - A continuous learning mindset and willingness to grow in both AI and security domains. - Problem-solving approach with a focus on practical, scalable solutions. Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business. Show more Show less
Posted 3 weeks ago
3.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of ways to get work done. There’s another option. Freshworks. With a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And, over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us. Job Description We’re looking for a Jr AI Security Architect to join our growing Security Architecture team. This role will support the design, implementation, and protection of AI/ML systems, models, and datasets. The ideal candidate is passionate about the intersection of artificial intelligence and cybersecurity, and eager to contribute to building secure-by-design AI systems that protect users, data, and business integrity. Key Responsibilities Secure AI Model Development - Partner with AI/ML teams to embed security into the model development lifecycle, including during data collection, model training, evaluation, and deployment. - Contribute to threat modeling exercises for AI/ML pipelines to identify risks such as model poisoning, data leakage, or adversarial input attacks. - Support the evaluation and implementation of model explainability, fairness, and accountability techniques to address security and compliance concerns. - Develop and train internal models for security purposes Model Training & Dataset Security - Help design controls to ensure the integrity and confidentiality of training datasets, including the use of differential privacy, data validation pipelines, and access controls. - Assist in implementing secure storage and version control practices for datasets and model artifacts. - Evaluate training environments for exposure to risks such as unauthorized data access, insecure third-party libraries, or compromised containers. AI Infrastructure Hardening - Work with infrastructure and MLOps teams to secure AI platforms (e.g., MLFlow, Kubeflow, SageMaker, Vertex AI) including compute resources, APIs, CI/CD pipelines, and model registries. - Contribute to security reviews of AI-related deployments in cloud and on-prem environments. - Assist in automating security checks in AI pipelines, such as scanning for secrets, validating container images, and enforcing secure permissions. Secure AI Integration in Products - Participate in the review and assessment of AI/ML models embedded into customer-facing products to ensure they comply with internal security and responsible AI guidelines. - Help develop misuse detection and monitoring strategies to identify model abuse (e.g., prompt injection, data extraction, hallucination exploitation). - Support product security teams in designing guardrails and sandboxing techniques for generative AI features (e.g., chatbots, image generators, copilots). Knowledge Sharing & Enablement - Assist in creating internal training and security guidance for data scientists, engineers, and developers on secure AI practices. - Help maintain documentation, runbooks, and security checklists specific to AI/ML workloads. - Stay current on emerging AI security threats, industry trends, and tools; contribute to internal knowledge sharing. Qualifications - 3-4 years of experience in LLM and 7-10 years of experience in cybersecurity, machine learning, or related fields. - Familiarity with ML frameworks (e.g., PyTorch, TensorFlow) and MLOps tools (e.g., MLFlow, Airflow, Kubernetes). - Familiarity with AI models and Supplychain risks - Understanding of common AI/ML security threats and mitigations (e.g., model inversion, adversarial examples, data poisoning). - Experience working with cloud environments (AWS, GCP, Azure) and securing workloads. - Some knowledge of responsible AI principles, privacy-preserving ML, or AI compliance frameworks is a plus. Soft Skills - Strong communication skills to collaborate across engineering, data science, and product teams. - A continuous learning mindset and willingness to grow in both AI and security domains. - Problem-solving approach with a focus on practical, scalable solutions. Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business. Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Keen on working directly with our clients and developing innovative ML and LLM Solutions? Want to be part of our APAC Center of Excellence team delivering some of the most innovative solutions in Data & AI? Ready to join a growing company that has won Microsoft Partner Of The Year for Data & AI ? Practical Information : Location: Delhi/Noida/Mumbai/Bangalore/Chennai, India | Reports to: Director of CoE Data & AI | Language Requirements: Professional English, written and verbal | Work Arrangement: Hybrid Join our CoE team as our new ML/LLM Engineer where you'll be driving the development, deployment, and optimization of Large Language Models and machine learning solutions for production environments, both on-premises and on Cloud platforms. You will master your role with key performance indicators (KPIs) that revolve around successful implementation and efficiency of these solutions . Other responsibilities will include: Collaborating closely with clients to understand their requirements and delivering custom solutions Extending prototypes and enhancing them into robust and scalable solutions Designing and fine-tuning large language models tailored for specific applications Efficiently managing large language models in production environment s to facilitate close to real-time solutions Developing and executing ML pipelines, deploying models to enhance accuracy across various process steps within our pipeline Your Competencies: 3+ years of experience in NLP or a similar role Understanding of data structures, data modeling, ML algorithms, and software architecture Experience coding in Python with ML frameworks (e.g., Keras or PyTorch) and libraries (e.g., scikit-learn) Practical experience with ML Cloud services on Azure or AWS (e.g., Azure ML, Amazon SageMaker, or MLFlow) About You: Being learning-oriented with a dedication to staying updated on cutting-edge advancements in large language models and NLP Demonstrating strong analytical and problem-solving skills Exhibiting excellent communication skills What's in it for you: Medical, and life insurance Hybrid workplace Internet & Mobile reimbursement Upskilling through certifications and training At Crayon, we are deeply committed to fostering a culture of diversity, equity, inclusion, and belonging (DEIB). We believe that diversity in all its forms strengthens our team and enhances innovation and effectiveness. We welcome applications from individuals of all backgrounds, regardless of race, colour, age, origin, religion, sexual orientation, gender (identity), genetic information, neurodiversity, disability, or any other basis protected by local laws and regulations. When filling vacancies, we prioritize equally qualified candidates who bring diverse backgrounds and experiences, helping to enrich our team dynamics and foster a vibrant, inclusive work environment. If you require any assistance or reasonable accommodation during the application process, please let us know. Apply to join an award-winning employer! Show more Show less
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TCS HIRING!! ROLE: AWS Data architect LOCATION: HYDERABAD YEAR OF EXP: 8 + YEARS Data Architect: Must have: Relational SQL/ Caching expertise – Deep knowledge of Amazon Aurora PostgreSQL, ElastiCache etc.. Data modeling – Experience in OLTP and OLAP schemas, normalization, denormalization, indexing, and partitioning. Schema design & migration – Defining best practices for schema evolution when migrating from SQL Server to PostgreSQL. Data governance – Designing data lifecycle policies, archival strategies, and regulatory compliance frameworks. AWS Glue & AWS DMS – Leading data migration strategies to Aurora PostgreSQL. ETL & Data Pipelines – Expertise in Extract, Transform, Load (ETL) workflows . Glue jobs features and event-driven architectures. Data transformation & mapping – PostgreSQL PL/pgSQL migration / transformation expertise while ensuring data integrity. Cross-platform data integration – Connecting cloud and on-premises / other cloud data sources. AWS Data Services – Strong experience in S3, Glue, Lambda, Redshift, Athena, and Kinesis. Infrastructure as Code (IaC) – Using Terraform, CloudFormation, or AWS CDK for database provisioning. Security & Compliance – Implementing IAM, encryption (AWS KMS), access control policies, and compliance frameworks (eg. GDPR ,PII). Query tuning & indexing strategies – Optimizing queries for high performance. Capacity planning & scaling – Ensuring high availability, failover mechanisms, and auto-scaling strategies. Data partitioning & storage optimization – Designing cost-efficient hot/cold data storage policies. Should have experience with setting up the AWS architecture as per the project requirements Good to have: Data Warehousing – Expertise in Amazon Redshift, Snowflake, or BigQuery. Big Data Processing – Familiarity with Apache Spark, EMR, Hadoop, or Kinesis. Data Lakes & Analytics – Experience in AWS Lake Formation, Glue Catalog, and Athena. Machine Learning Pipelines – Understanding of SageMaker, BedRock etc. for AI-driven analytics. CI/CD for Data Pipelines – Knowledge of AWS CodePipeline, Jenkins, or GitHub Actions. Serverless Data Architectures – Experience with event-driven systems (SNS, SQS, Step Functions). Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Job Title: AI Engineer Job Type: Full-time, Contractor Location: Remote About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market. Job Summary Join our customer's team as an AI Engineer and play a pivotal role in shaping next-generation AI solutions. You will leverage cutting-edge technologies such as GenAI, LLMs, RAG, and LangChain to develop scalable, innovative models and systems. This is a unique opportunity for someone who is passionate about rapidly advancing their AI expertise and thrives in a collaborative, remote-first environment. Key Responsibilities Design and develop advanced AI models and algorithms using GenAI, LLMs, RAG, LangChain, LangGraph, and AI Agent frameworks. Implement, deploy, and optimize AI solutions on Amazon SageMaker. Collaborate cross-functionally to integrate AI models into existing platforms and workflows. Continuously evaluate the latest AI research and tools to ensure leading-edge technology adoption. Document processes, experiments, and model performance with clear and concise written communication. Troubleshoot, refine, and scale deployed AI solutions for efficiency and reliability. Engage proactively with the customer's team to understand business needs and deliver value-driven AI innovations. Required Skills and Qualifications Proven hands-on experience with GenAI, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) techniques. Strong proficiency in frameworks such as LangChain, LangGraph, and building/resolving AI Agents. Demonstrated expertise in deploying and managing AI/ML solutions on AWS SageMaker. Exceptional written and verbal communication skills, with the ability to explain complex concepts to diverse audiences. Ability and eagerness to rapidly learn, adapt, and apply new AI tools and techniques as the field evolves. Background in software engineering, computer science, or a related technical discipline. Strong problem-solving skills accompanied by a collaborative and proactive mindset. Preferred Qualifications Experience working with remote or distributed teams across multiple time zones. Familiarity with prompt engineering and orchestration of complex AI agent pipelines. A portfolio of successfully deployed GenAI solutions in production environments. Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices Professional & Technical Skills: 1. Python Programming 2. Web framework expertise (Django, Flask, or FastAPI) 3. Data processing and analysis 4. Database technologies (SQL and NoSQL) 5. API development 6. Significant experience working with AWS Lambda 7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) with Any AWS certification is a plus. 8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform) 9. Test-Driven Development (TDD) 10. DevOps practices 11. Agile methodologies. 12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena). 13. Strong knowledge of AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM). Additional Information: 1. The candidate should have a minimum of 3 years of experience in Python Programming. 2. This position is based at our Hyderabad office 3. A 15 years full time education is required (Bachelor of computer science, or any related stream. master’s degree preferred.) 15 years full time education Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices Professional & Technical Skills: 1. Python Programming. 2. Web framework expertise (Django, Flask, or FastAPI) 3. Data processing and analysis 4. Database technologies (SQL and NoSQL) 5. API development 6. Significant experience working with AWS Lambda 7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) with Any AWS certification is a plus. 8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform) 9. Test-Driven Development (TDD) 10. DevOps practices 11. Agile methodologies. 12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena). 13. Strong knowledge of AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM). Additional Information: 1. The candidate should have a minimum of 5 years of experience in Python Programming. 2. This position is based at our Hyderabad office 3. A 15 years full time education is required (Bachelor of computer science, or any related stream. master’s degree preferred.) 15 years full time education Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices Professional & Technical Skills: 1. Python Programming. 2. Web framework expertise (Django, Flask, or FastAPI) 3. Data processing and analysis 4. Database technologies (SQL and NoSQL) 5. API development 6. Significant experience working with AWS Lambda 7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) with Any AWS certification is a plus. 8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform) 9. Test-Driven Development (TDD) 10. DevOps practices 11. Agile methodologies. 12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena). 13. Strong knowledge of AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM). Additional Information: 1. The candidate should have a minimum of 5 years of experience in Python Programming. 2. This position is based at our Hyderabad office 3. A 15 years full time education is required (Bachelor of computer science, or any related stream. master’s degree preferred.) 15 years full time education Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : Bachelor of Engineering in Electronics or any related stream Summary: As a Senior Python Engineer, you will develop data-driven applications on AWS for the client. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices. Professional & Technical Skills: 1. At least 7 years of experience in Python Programming with Web framework expertise (Django, Flask, or FastAPI). 2. Exposure on database technologies (SQL and NoSQL) and API development. 3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform). 4. Exposure on Test-Driven Development (TDD) 5. Practices DevOps in software solution and well-versed with Agile methodologies. 6. AWS certification is a plus. 7. Have well-developed analytical skills, a person who is rigorous but pragmatic, being able to justify decisions with solid rationale. Additional Information: 1. The candidate should have a minimum of 7 years of experience in Python Programming. 2. This position is based at our Hyderabad office 3. A 15 years full time education is required (bachelor’s degree in computer science, Software Engineering, or related field). Bachelor of Engineering in Electronics or any related stream Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : Bachelor of Engineering in Electronics or any related stream Summary: As a Mid Full Stack Engineer, you will develop data-driven applications on AWS for the client. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices. Professional & Technical Skills: 1. Experience in Python Programming with Web framework expertise (Django, Flask, or FastAPI). 2. Exposure on database technologies (SQL and NoSQL) and API development. 3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform). 4. Exposure on Test-Driven Development (TDD) 5. Practices DevOps in software solution and well-versed with Agile methodologies. 6. AWS certification is a plus. 7. Have well-developed analytical skills, a person who is rigorous but pragmatic, being able to justify decisions with solid rationale Additional Information: 1. The candidate should have a minimum of 3 years of experience in Python Programming. 2. This position is based at our Hyderabad office 3. A 15 years full time education is required (bachelor’s degree in computer science, Software Engineering, or related field). Bachelor of Engineering in Electronics or any related stream Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely even if theyre daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You'll Be Doing Lead development of advanced machine learning and statistical models Design scalable data pipelines using PySpark Perform data transformation and exploratory analysis using Pandas, Numpy and SQL Build, train and fine tune machine learning and deep learning models using TensorFlow and PyTorch Mentor junior engineers and lead code reviews, best practices and documentation. Designing and implementing big data, streaming AI/ML training and prediction pipelines. Translate complex business problems into data driven solutions. Promote best practices in data science, and model governance. Stay ahead with evolving technologies and guide strategic data initiatives. What we're looking for You'll Need To Have Bachelor's degree or four or more years of work experience. Experience in Python, PySpark and SQL. Strong proficiency in Pandas, Numpy, Excel, Plotly, Matplotlib, Seaborn, ETL, AWS and Sagemaker Experience in Supervised learning models: Regression, Classification and Unsupervised learning models: Anomaly detection, clustering. Extensive experience with AWS analytics services, including Redshift, Glue, Athena, Lambda, and Kinesis. Knowledge in Deep Learning Autoencoders, CNN. RNN, LSTM, hybrid models Experience in Model evaluation, cross validation, hyper parameters tuning Familiarity with data visualization tools and techniques. Even better if you have one or more of the following: Experience with machine learning and statistical analysis. Experience in Hypothesis testing. Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders. If our company and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above. #TPDRNONCDIO Where youll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics. Locations Chennai, India Bangalore, India Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Job Description This is a remote position. Position: AI/ML Engineer Intern Duration: 3 months Location: Work From Home About The Role We are seeking an experienced AI/ML Engineer to join our team building an innovative AI-powered SaaS platform. You will be responsible for developing the core AI functionalities that power our document analysis and generation, win probability prediction, and compliance checking features. This is a key role in creating a Blue Ocean product that transforms how organizations approach proposal writing and bid management. Responsibilities Design and implement NLP models for intelligent document analysis and high-quality proposal generation Develop predictive analytics models to calculate win probability scores based on historical data, competitor analysis, and proposal content Create AI algorithms for compliance checking that can identify missing requirements and suggest corrections across various regulatory frameworks (FAR, DFARS, HIPAA, IRS non-profit rules) Build machine learning models that analyse competitor strategies and pricing trends to provide actionable insights. Implement systems that allow AI to learn from past proposals, client feedback, and bid outcomes to continuously improve future recommendations. Collaborate with subject matter experts to encode domain knowledge into AI models Optimize AI/ML models for performance and accuracy Develop a learning system that improves bid strategies based on post-submission analytics and feedback Requirements Advanced degree in Computer Science, Machine Learning, AI, or equivalent practical experience Aware of developing NLP/NLG solutions for document analysis and generation Strong experience with machine learning frameworks (TensorFlow, PyTorch, Hugging Face Transformers) Practical experience implementing and fine-tuning large language models Knowledge of predictive modelling and statistical analysis for forecasting outcomes Experience with text extraction, classification, and sentiment analysis Familiarity with compliance regulations and document requirements is a plus Strong Python programming skills Experience with cloud-based AI/ML services (AWS SageMaker, Azure ML, etc.) Excellent problem-solving skills and attention to detail Advanced Technical Skills Required Extensive experience with transformer-based language models (BERT, GPT, T5) and fine-tuning techniques for domain-specific applications Demonstrated ability to implement Retrieval-Augmented Generation (RAG) architectures for grounding AI outputs in factual data Experience developing multi-task learning models that can simultaneously handle document classification, entity extraction, and text generation Proficiency with vector embeddings and semantic search for document similarity and retrieval Strong knowledge of prompt engineering techniques for controlling AI outputs within specific parameters Experience with explainable AI methods to provide transparency in decision-making processes Skills in building reinforcement learning from human feedback (RLHF) systems to improve model outputs Advanced knowledge of evaluation metrics for NLP systems (ROUGE, BLEU, BERTScore, human evaluation frameworks) Ability to implement efficient token optimization techniques for working with large documents Experience with ensemble methods for improving prediction accuracy in win probability models. Benefits Hands-on experience in a dynamic and creative work environment. Mentorship and guidance from experienced project managers. Opportunity to work on real projects that impact the company's growth and success. A certificate of completion and a letter of recommendation upon successful completion of the internship. This is an unpaid internship. A performance-based stipend will be issued based on your contribution and achievements during the internship. Show more Show less
Posted 3 weeks ago
5.0 years
0 Lacs
Greater Bengaluru Area
On-site
What if the work you did every day could impact the lives of people you know? Or all of humanity? At Illumina, we are expanding access to genomic technology to realize health equity for billions of people around the world. Our efforts enable life-changing discoveries that are transforming human health through the early detection and diagnosis of diseases and new treatment options for patients. Working at Illumina means being part of something bigger than yourself. Every person, in every role, has the opportunity to make a difference. Surrounded by extraordinary people, inspiring leaders, and world changing projects, you will do more and become more than you ever thought possible. Position Summary We are seeking a highly skilled Senior Data Engineer Developer with 5+ years of experience to join our talented team in Bangalore. In this role, you will be responsible for designing, implementing, and optimizing data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Additionally, you will bring strong domain expertise in operations organizations, with a focus on supply chain and manufacturing functions. If you're a seasoned data engineer with a proven track record of delivering impactful data solutions in operations contexts, we want to hear from you. Responsibilities Lead the design, development, and optimization of data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Apply strong domain expertise in operations organizations, particularly in functions like supply chain and manufacturing, to understand data requirements and deliver tailored solutions. Utilize big data processing frameworks such as Apache Spark to process and analyze large volumes of operational data efficiently. Implement data transformations, aggregations, and business logic to support analytics, reporting, and operational decision-making. Leverage cloud-based data platforms such as Snowflake to store and manage structured and semi-structured operational data at scale. Utilize dbt (Data Build Tool) for data modeling, transformation, and documentation to ensure data consistency, quality, and integrity. Monitor and optimize data pipelines and ETL processes for performance, scalability, and reliability in operations contexts. Conduct data profiling, cleansing, and validation to ensure data quality and integrity across different operational data sets. Collaborate closely with cross-functional teams, including operations stakeholders, data scientists, and business analysts, to understand operational challenges and deliver actionable insights. Stay updated on emerging technologies and best practices in data engineering and operations management, contributing to continuous improvement and innovation within the organization. All listed requirements are deemed as essential functions to this position; however, business conditions may require reasonable accommodations for additional task and responsibilities. Preferred Experience/Education/Skills Bachelor's degree in Computer Science, Engineering, Operations Management, or related field. 5+ years of experience in data engineering, with proficiency in Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Strong domain expertise in operations organizations, particularly in functions like supply chain and manufacturing. Strong domain expertise in life sciences manufacturing equipment, with a deep understanding of industry-specific challenges, processes, and technologies. Experience with big data processing frameworks such as Apache Spark and cloud-based data platforms such as Snowflake. Hands-on experience with data modeling, ETL development, and data integration in operations contexts. Familiarity with dbt (Data Build Tool) for managing data transformation and modeling workflows. Familiarity with reporting and visualization tools like Tableau, Powerbi etc. Good understanding of advanced data engineering and data science practices and technologies like pypark, sagemaker, cloudera MLflow etc. Experience with SAP, SAP HANA and Teamcenter applications is a plus. Excellent problem-solving skills, analytical thinking, and attention to detail. Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and operations stakeholders. Eagerness to learn and adapt to new technologies and tools in a fast-paced environment. Illumina believes that everyone has the ability to make an impact, and we are proud to be an equal opportunity employer committed to providing employment opportunity regardless of sex, race, creed, color, gender, religion, marital status, domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition, sexual orientation, pregnancy, military or veteran status, citizenship status, and genetic information. Show more Show less
Posted 3 weeks ago
6.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
About The Company Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure is limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We’re looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere . About The Role We are seeking a highly motivated Senior Data Engineer to join our Data Platform team for our Edge Computing AI Platform. As a Data Engineer in our Data Platform team, you will be responsible for helping us shape the future of data ingestion, processing, and analysis, while maintaining and improving existing data systems. If you are a highly motivated individual with a passion for cutting-edge AI, cloud, edge, and infrastructure technology and are ready to take on the challenge of defining and delivering a new computing and AI platform, we would love to hear from you. Location. This role is office-based at our Trivandrum, Kerala office. What You'll Do (Key Responsibilities) Build new tools and services that support other teams’ data workflows, ingestion, processing, and distribution. Design, discuss, propose, and implement to our existing data tooling and services. Collaborate with a diverse group of people, giving and receiving feedback for growth. Execute on big opportunities and contribute to building a company culture rising to the top of the AI and Edge Computing industry. Required Qualifications 6+ years of experience in software development. Experience with data modeling, ETL/ELT processes, streaming data pipelines. Familiarity with data warehousing technologies like Databricks/Snowflake/BigQuery/Redshift and data processing platforms like Spark; working with data warehousing file formats like Avro and Parquet. Strong understanding of Storage (Object Stores, Data Virtualization) and Compute (Spark on K8S, Databricks, AWS EMR and the like) architectures used by data stack solutions and platforms. Experience with scheduler tooling like Airflow. Experience with version control systems like Git and working using a standardized git flow. Strong analytical and problem-solving skills, with the ability to work independently and collaboratively in a team environment. Professional experience developing data-heavy platforms and/or APIs. A strong understanding of distributed systems and how architectural decisions affect performance and maintainability. Bachelor’s degree in computer science, Electrical Engineering, or related field. Preferred Qualifications Experience analyzing ML algorithms that could be used to solve a given problem and ranking them by their success probability. Proficiency with a deep learning framework such as TensorFlow or Keras. Understanding of MLOps practices and practical experience with platforms like Kubeflow / Sagemaker. Compensation & Benefits For India-based candidates: We offer a competitive base salary along with equity options, providing an opportunity to share in the success and growth of Armada. You're a Great Fit if You're A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude Thrive in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company A collaborative team player. You focus on business success and are motivated by team accomplishment vs personal agenda Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you Equal Opportunity Statement At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time. Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description EU INTech Partner Growth Experience(PGX) is seeking an Applied Scientist to lead the development of machine learning solutions for the EU Consumer Electronics business. In this role, you will push the boundaries of advanced ML techniques and collaborate closely with product and engineering teams to create innovative buying and forecasting solutions for the business. These new models will primarily benefit Smart Retail project that aims to revolutionize CPFR (Collaborative Planning, Forecasting, and Replenishment) Retail operations, driving automation, enhancing decision-making processes, and achieving scale across eligible categories such as PC, Home Entertainment or Wireless. Smart Retail solution is composed of an internal interface automating selection management mechanisms currently performed manually, followed by the creation of a vendor-facing interface on Vendor Central reducing time spent collecting required inputs. The project's key functionalities include (i) a Ranging model operating from category to product attributes level, pre-ASIN creation and when selection is substitutable, (ii) an advanced forecasting model designed for new selection and accounting cannibalization, (iii) ordering inputs optimization in line with for SCOT guideline compliance, and intelligent inventory management for sell-through tracking. Smart Retail success also depends on its integration with existing systems (SCOT) to minimize manual intervention and increase accuracy. Key job responsibilities Design, develop, and deploy advanced machine learning models to address complex, real-world challenges at scale. Build new forecasting and time-series models or enhance existing methods using scalable techniques. Partner with cross-functional teams, including product managers and engineers, to identify impactful opportunities and deliver science-driven solutions. Develop and optimize scalable ML solutions, ensuring seamless production integration and measurable impact on business metrics. Continuously enhance model performance through retraining, parameter tuning, and architecture improvements using Amazon’s extensive data resources. Lead initiatives, mentor junior scientists and engineers, and promote the adoption of ML methodologies across teams. Stay abreast of advancements in ML research, contribute to top-tier publications, and actively engage with the scientific community. Basic Qualifications PhD, or Master's degree and 3+ years of CS, CE, ML or related field experience 3+ years of building models for business application experience Experience programming in Java, C++, Python or related language Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing Preferred Qualifications Experience in patents or publications at top-tier peer-reviewed conferences or journals 3+ years of hands-on predictive modeling and large data analysis experience Experience working with large-scale distributed systems such as Spark, Sagemaker or similar frameworks Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka - A66 Job ID: A2873880 Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
A career within our Infrastructure practice will provide you with the opportunity to design, build, coordinate and maintain the IT environments for clients to run internal operations, collect data, monitor, develop and launch products. Infrastructure management consists of hardware, storage, compute, network and software layers. As a part of our Infrastructure Engineering team, you will be responsible for maintaining the critical IT systems which includes build, run and maintenance while providing technical support and training that aligns to industry leading practices. To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this we have the PwC Professional; our global leadership development framework. It gives us a single set of expectations across our lines, geographies and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future. Responsibilities As a Senior Associate, you'll work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to: Use feedback and reflection to develop self awareness, personal strengths and address development areas. Delegate to others to provide stretch opportunities, coaching them to deliver results. Demonstrate critical thinking and the ability to bring order to unstructured problems. Use a broad range of tools and techniques to extract insights from current industry or sector trends. Review your work and that of others for quality, accuracy and relevance. Know how and when to use tools available for a given situation and can explain the reasons for this choice. Seek and embrace opportunities which give exposure to different situations, environments and perspectives. Use straightforward communication, in a structured way, when influencing and connecting with others. Able to read situations and modify behavior to build quality relationships. Uphold the firm's code of ethics and business conduct. AI Engineer Overview We are seeking an exceptional AI Engineer to drive the development, optimization, and deployment of cutting-edge generative AI solutions for our clients. This role is at the forefront of applying generative models to solve real-world business challenges, requiring deep expertise in both the theoretical underpinnings and practical applications of generative AI. Core Qualifications Advanced degree (MS/PhD) in Computer Science, Machine Learning, or related field with a focus on generative models 3+ years of hands-on experience developing and deploying AI models in production environments with 1 year of experience in developing generative AI pilots, proofs of concept, and prototypes Deep understanding of state-of-the-art AI architectures (e.g., Transformers, VAEs, GANs, Diffusion Models) Expertise in PyTorch or TensorFlow, with a preference for experience in both Proficiency in Python and software engineering best practices for AI systems Technical Skills Required Demonstrated experience with large language models (LLMs) such as GPT, BERT, T5, etc. Practical understanding of generative AI frameworks (e.g., Hugging Face Transformers, OpenAI GPT, DALL-E) Familiarity with prompt engineering and few-shot learning techniques Expertise in MLOps and LLMOps practices, including CI/CD for ML models Strong knowledge of one or more cloud-based AI services (e.g., AWS SageMaker, Azure ML, Google Vertex AI) Preferred Proficiency in optimizing generative models for inference (quantization, pruning, distillation) Experience with distributed training of large-scale AI models Experience with model serving technologies (e.g., TorchServe, TensorFlow Serving, Triton Inference Server) Key Responsibilities Architect and implement end-to-end generative AI solutions, from data preparation to production deployment Develop custom AI models and fine-tune pre-trained models for specific client use cases Optimize generative models for production, balancing performance, latency, and resource utilization Design and implement efficient data pipelines for training and serving generative models Develop strategies for effective prompt engineering and few-shot learning in production systems Implement robust evaluation frameworks for generative AI outputs Collaborate with cross-functional teams to integrate generative AI capabilities into existing systems Address challenges related to bias, fairness, and ethical considerations in generative AI applications Project Delivery Lead the technical aspects of generative AI projects from pilot to production Develop proof-of-concepts and prototypes to demonstrate the potential of generative AI in solving client problems Conduct technical feasibility studies for applying generative AI to novel use cases Implement monitoring and observability solutions for deployed generative models Troubleshoot and optimize generative AI systems in production environments Client Engagement Provide expert technical guidance on generative AI capabilities and limitations to clients Collaborate with solution architects to design generative AI-powered solutions that meet client needs Present technical approaches and results to both technical and non-technical stakeholders Assist in scoping and estimating generative AI projects Innovation and Knowledge Sharing Stay at the forefront of generative AI research and industry trends Contribute to the company's intellectual property through patents or research publications Develop internal tools and frameworks to accelerate generative AI development Mentor junior team members on generative AI technologies and best practices Contribute to technical blog posts and whitepapers on generative AI applications The ideal candidate will have a proven track record of successfully deploying AI models in production environments, a deep understanding of the latest advancements in generative AI, and the ability to apply this knowledge to solve complex business problems. They should be passionate about pushing the boundaries of what's possible with generative AI and excited about the opportunity to shape the future of AI-driven solutions for our clients. Show more Show less
Posted 3 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary Ecolab’s AI Engineering organization is seeking a highly experienced Lead Data Scientist to lead the development of cutting-edge AI/ML, and data science solutions that power our commercial Digital solutions. In this hands-on role, you will be responsible for designing, building, and deploying AI-powered solutions that drive business value and innovation. The ideal candidate will have a strong background in data science, machine learning, and software development, with a proven track record of delivering high-impact projects. Key Responsibilities Design and Develop AI-Powered Solutions: Lead the design, development, and deployment of AI-powered solutions using Azure Databricks, and other components or our technology stack. Technical Leadership: Provide technical leadership and guidance to junior data scientists and engineers on Databricks, Mosaic AI, and AI agent development. Mentor and coach team members to improve their skills and expertise. Databricks and Mosaic AI Expertise: Develop and maintain expertise in Databricks and Mosaic AI, staying up-to-date with the latest features and best practices. Apply this expertise to drive the adoption of these technologies within the organization. AI Agent Development: Design and develop AI agents using Mosaic AI and other relevant technologies. Collaborate with stakeholders to identify opportunities for AI agents to drive business value. Data Science Innovation: Stay abreast of the latest advancements in data science and AI, identifying opportunities to apply new techniques and technologies to drive business innovation. Collaboration and Communication: Collaborate with stakeholders across the organization to identify business problems and develop data-driven solutions. Communicate complex technical concepts to non-technical stakeholders, driving adoption and understanding of data science solutions. Requirements Education/Experience: Degree or advanced degree in data science, physics, mathematics, statistics, computer science or related quantitative field. BS and 10+ years related experience or MS and 7+ years related experience or PhD and less than 2 years’ experience. 1-3 Years Supervisory experience preferred. Technical Skills: Proficiency in Databricks, or a similar platform like AWS SageMaker, Azure Machine Learning, Vertex AI, and others. Strong programming skills in languages such as Python, Scala, SQL, etc. Familiarity with data engineering, data warehousing, and data governance. Soft Skills Excellent communication and collaboration skills. Strong leadership and mentoring skills. Ability to drive innovation and stay up-to-date with the latest advancements in data science and AI. Nice To Have Experience with Cloud Platforms: Experience working with cloud platforms such as AWS, Azure, or Google Cloud. Experience with AI agents: Experience developing AI agents using Mosaic AI or similar code-first platforms. Certifications: Databricks or Mosaic AI certifications are a plus. Open-Source Contributions: Contributions to open-source projects related to data science, machine learning, or AI. What We Offer Opportunity to take on some of the world’s most meaningful challenges, helping customers achieve clean water, safe food, abundant energy, and healthy environments. Ability to make an impact and shape your career with a company that is passionate about your growth. The opportunity to work with the latest technologies and techniques in the data science and machine learning engineering field, acting as the subject matter expert in a growing organization within the company. Support of an organization that believes it is vital to include and engage diverse people, perspectives, and ideas to achieve our best. If you're passionate about data science, AI, and innovation, we'd love to hear from you! Please submit your resume and a cover letter explaining why you're the ideal candidate for this role. Our Commitment to Diversity and Inclusion Ecolab is committed to fair and equal treatment of associates and applicants and furthering the principles of Equal Opportunity to Employment. Our goal is to fully utilize minority, female, and disabled individuals at all levels of the workforce. We will recruit, hire, promote, transfer and provide opportunities for advancement based on individual qualifications and job performance. In all matters affecting employment, compensation, benefits, working conditions, and opportunities for advancement, Ecolab will not discriminate against any associate or applicant for employment because of race, religion, color, creed, national origin, citizenship status, sex, sexual orientation, gender identity and expressions, genetic information, marital status, age, or disability. Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Chandigarh, India
On-site
Sagemaker, Python, LLM A day in the life of an Infoscion As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized high quality code deliverables, continual knowledge management and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/ systems and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability Good knowledge on software configuration management systems Awareness of latest technologies and Industry trends Logical thinking and problem solving skills along with an ability to collaborate Understanding of the financial processes for various types of projects and the various pricing models available Ability to assess the current processes, identify improvement areas and suggest the technology solutions One or two industry domain knowledge Client Interfacing skills Project and Team management Show more Show less
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We’re looking for problem solvers, innovators, and dreamers who are searching for anything but business as usual. Like us, you’re a high performer who’s an expert at your craft, constantly challenging the status quo. You value inclusivity and want to join a culture that empowers you to show up as your authentic self. You know that success hinges on commitment, that our differences make us stronger, and that the finish line is always sweeter when the whole team crosses together. Senior Software Engineer We’re looking for problem solvers, innovators, and dreamers who are searching for anything but business as usual. Like us, you’re a high performer who’s an expert at your craft, constantly challenging the status quo. You value inclusivity and want to join a culture that empowers you to show up as your authentic self. You know that success hinges on commitment, that our differences make us stronger, and that the finish line is always sweeter when the whole team crosses together. Overview Why work for just any analytics company? At Alteryx, Inc., we are explorers, dreamers and innovators. We’re on a journey to build the best analytics platform in the world, but we can’t do it without people like you, leading the way. Forget the stereotypical tech companies of the past. Embrace the unconventional, exercise your imagination and help alter the future with Alteryx. Job Title: Senior Software Engineer - AI/ML Location: Bangalore (Hybrid) Department: Engineering / Data Science / AI Solutions Reports To: Engineering Manager / Technical Lead About Alteryx At Alteryx, we’re transforming the way businesses leverage data. Our AI/ML solutions empower teams to make data-driven decisions, and we’re seeking a Senior Software Engineer - AI/ML to join our engineering team in Bangalore. In this role, you’ll contribute to developing and deploying scalable AI/ML solutions, leveraging Python and React to build impactful applications. While experience with Scala is a plus, a strong understanding of AI/ML algorithms and cloud infrastructure (preferably GCP, but AWS experience is also welcome) is essential. Key Responsibilities Design, develop, and deploy scalable AI/ML models using Python and React. Collaborate with data scientists and ML engineers to integrate models into productionenvironments Build interactive and dynamic web applications using React to visualize AI/ML insights Develop and maintain data pipelines to support model training, evaluation, and deployment mplement best practices for building efficient, maintainable, and scalable machine learning solutions Design and optimize data processing systems using GCP AI/ML services (preferred) or AWS ML tools Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions Drive code quality, conduct peer code reviews, and improve system performance Research and stay updated on emerging AI/ML frameworks, libraries, and trend Required Skills & Experience 5+ years of experience as a Software Engineer or ML Engineer Strong programming skills in Python (mandatory) with experience in building scalable backend systems Hands-on experience with React for developing dynamic and interactive UI components Solid understanding of AI/ML algorithms, such as regression models, decision trees, clustering, and neural networks Multiple-LLMs, GenAI - HuggingFace, LangChain, LangGraph etc Vector databases, RAG Cloud databases, Snowflake, data warehouses, data lake etc Experience in building, training, and deploying ML models in cloud environments like GCP (preferred) or AWS Familiarity with data pipelines, ETL processes, and model serving frameworks (e.g., MLFlow, Kubeflow, or Seldon) Strong grasp of software engineering best practices such as code reviews, version control, and CI/CD pipelines Excellent problem-solving skills and the ability to work independently or collaboratively in a fast-paced environment Preferred Skills Experience with Scala for data engineering or large-scale ML pipelines Familiarity with GCP AI Platform, Vertex AI, or Amazon SageMaker Knowledge of RESTful API development and microservices architecture Understanding of containerization and orchestration tools such as Docker and Kubernetes Why Join Us? Be part of a forward-thinking team that values innovation and collaboration. Work on impactful AI/ML projects that solve real-world business challenges. Enjoy a flexible work environment with opportunities for growth and development Access to cutting-edge tools, cloud platforms, and the latest advancements in AI/ML. Find yourself checking a lot of these boxes but doubting whether you should apply? At Alteryx, we support a growth mindset for our associates through all stages of their careers. If you meet some of the requirements and you share our values, we encourage you to apply. As part of our ongoing commitment to a diverse, equitable, and inclusive workplace, we’re invested in building teams with a wide variety of backgrounds, identities, and experiences. This position involves access to software/technology that is subject to U.S. export controls. Any job offer made will be contingent upon the applicant’s capacity to serve in compliance with U.S. export controls. Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Description Lead I -Software engineering Principal Developer – ML/Prompt Engineer Who We Are At UST, we help the world’s best organizations grow and succeed through transformation. Bringing together the right talent, tools, and ideas, we work with our client to co-create lasting change. Together, with over 30,000 employees in 25 countries, we build for boundless impact—touching billions of lives in the process. Visit us at . Technologies: Amazon Bedrock, RAG Models, Java, Python, C or C++, AWS Lambda, Responsibilities Responsible for developing, deploying, and maintaining a Retrieval Augmented Generation (RAG) model in Amazon Bedrock, our cloud-based platform for building and scaling generative AI applications. Design and implement a RAG model that can generate natural language responses, commands, and actions based on user queries and context, using the Anthropic Claude model as the backbone. Integrate the RAG model with Amazon Bedrock, our platform that offers a choice of high-performing foundation models from leading AI companies and Amazon via a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Optimize the RAG model for performance, scalability, and reliability, using best practices and robust engineering methodologies. Design, test, and optimize prompts to improve performance, accuracy, and alignment of large language models across diverse use cases. Develop and maintain reusable prompt templates, chains, and libraries to support scalable and consistent GenAI applications. Skills/Qualifications Experience in programming with at least one software language, such as Java, Python, or C/C++. Experience in working with generative AI tools, models, and frameworks, such as Anthropic, OpenAI, Hugging Face, TensorFlow, PyTorch, or Jupyter. Experience in working with RAG models or similar architectures, such as RAG, Ragna, or Pinecone. Experience in working with Amazon Bedrock or similar platforms, such as AWS Lambda, Amazon SageMaker, or Amazon Comprehend. Ability to design, iterate, and optimize prompts for various LLM use cases (e.g., summarization, classification, translation, Q&A, and agent workflows). Deep understanding of prompt engineering techniques (zero-shot, few-shot, chain-of-thought, etc.) and their effect on model behavior. Familiarity with prompt evaluation strategies, including manual review, automatic metrics, and A/B testing frameworks. Experience building prompt libraries, reusable templates, and structured prompt workflows for scalable GenAI applications. Ability to debug and refine prompts to improve accuracy, safety, and alignment with business objectives. Awareness of prompt injection risks and experience implementing mitigation strategies. Familiarity with prompt tuning, parameter-efficient fine-tuning (PEFT), and prompt chaining methods. Familiarity with continuous deployment and DevOps tools preferred. Experience with Git preferred Experience working in agile/scrum environments Successful track record interfacing and communicating effectively across cross-functional teams. Good communication, analytical and presentation skills, problem-solving skills and learning attitude. What We Believe We’re proud to embrace the same values that have shaped UST since the beginning. Since day one, we’ve been building enduring relationships and a culture of integrity. And today, it's those same values that are inspiring us to encourage innovation from everyone, to champion diversity and inclusion and to place people at the centre of everything we do. Humility We will listen, learn, be empathetic and help selflessly in our interactions with everyone. Humanity Through business, we will better the lives of those less fortunate than ourselves. Integrity We honour our commitments and act with responsibility in all our relationships. Equal Employment Opportunity Statement UST is an Equal Opportunity Employer. We believe that no one should be discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion, or sexual orientation. All employment decisions shall be made without regard to age, race, creed, colour, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. UST reserves the right to periodically redefine your roles and responsibilities based on the requirements of the organization and/or your performance. To support and promote the values of UST. Comply with all Company policies and procedures Skills Java,Python,C++ Show more Show less
Posted 3 weeks ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Lead/Manager Overview: We are seeking a visionary and dynamic individual to lead our AI initiatives and data-driven strategies. This role is crucial in shaping the future of our company by leveraging advanced technologies to drive innovation and growth. The ideal candidate will possess a deep understanding of AI, machine learning, and data analytics, along with a proven track record in leadership and strategic execution. Key Responsibilities: Self-Driven Initiative: Take ownership of projects and drive them to successful completion with minimal supervision, demonstrating a proactive and entrepreneurial mindset. Stakeholder Communication: Present insights, findings, and strategic recommendations to senior management and key stakeholders, fostering a data-driven decision-making culture. Executive Collaboration: Report directly to the founders and collaborate with other senior leaders to shape the company's direction and achieve our ambitious goals. Innovation & Problem-Solving: Foster a culture of innovative thinking and creative problem-solving to tackle complex challenges and drive continuous improvement. AI Research & Development: Oversee AI research and development initiatives, ensuring the integration of cutting-edge technologies and methodologies. Data Management: Ensure effective data collection, management, and analysis to support AI-driven decision-making and product development. Required Skills and Qualifications: Bachelor's degree from a Tier 1 institution or an MBA from a recognized institution. Proven experience in a managerial role, preferably in a startup environment. Strong leadership and team management skills. Excellent strategic thinking and problem-solving abilities. Exceptional communication and interpersonal skills. Ability to thrive in a fast-paced, dynamic environment. Entrepreneurial mindset with a passion for innovation and growth. Extensive experience with AI technologies, machine learning, and data analytics. Proficiency in programming languages such as Python, R, or similar. Familiarity with data visualization tools like Tableau, Power BI, or similar. Strong understanding of data governance, privacy, and security best practices. Technical Skills: Machine Learning Frameworks: Expertise in frameworks such as TensorFlow, PyTorch, or Scikit-learn. Data Processing: Proficiency in using tools like Apache Kafka, Apache Flink, or Apache Beam for real-time data processing. Database Management: Experience with SQL and NoSQL databases, including MySQL, PostgreSQL, MongoDB, or Cassandra. Big Data Technologies: Hands-on experience with Hadoop, Spark, Hive, or similar big data technologies. Cloud Computing: Strong knowledge of cloud services and infrastructure, including AWS (S3, EC2, SageMaker), Google Cloud (BigQuery, Dataflow), or Azure (Data Lake, Machine Learning). DevOps and MLOps: Familiarity with CI/CD pipelines, containerization (Docker, Kubernetes), and orchestration tools for deploying and managing machine learning models. Data Visualization: Advanced skills in data visualization tools such as Tableau, Power BI, or D3.js to create insightful and interactive dashboards. Natural Language Processing (NLP): Experience with NLP techniques and tools like NLTK, SpaCy, or BERT for text analysis and processing. Large Language Models (LLMs): Proficiency in working with LLMs such as GPT-3, GPT-4, or similar for natural language understanding and generation tasks. Computer Vision: Knowledge of computer vision technologies and libraries such as OpenCV, YOLO, or TensorFlow Object Detection API. Preferred Experience: 5-10 years of relevant experience Proven Track Record: Demonstrated success in scaling businesses or leading teams through significant growth phases, showcasing your ability to drive impactful results. AI Expertise: Deep familiarity with the latest AI tools and technologies, including Generative AI applications, with a passion for staying at the forefront of technological advancements. Startup Savvy: Hands-on experience in early-stage startups, with a proven ability to navigate the Show more Show less
Posted 3 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Category: AIML Job Type: Full Time Job Location: Bengaluru Mangalore Experience: 4-8 Years Skills: AI AWS/AZURE/GCP Azure ML C computer vision data analytics Data Modeling Data Visualization deep learning Descriptive Analytics GenAI Image processing Java LLM models ML ONNX Predictive Analytics Python R Regression/Classification Models SageMaker SQL TensorFlow Position Overview We are looking for an experienced AI/ML Engineer to join our team in Bengaluru. The ideal candidate will bring a deep understanding of machine learning, artificial intelligence, and big data technologies, with proven expertise in developing scalable AI/ML solutions. You will lead technical efforts, mentor team members, and collaborate with cross-functional teams to design, develop, and deploy cutting edge AI/ML applications. Job Details Job Category: AI/ML Engineer. Job Type: Full-Time Job Location: Bengaluru Experience Required: 4-8 Years About Us We are a multi-award-winning creative engineering company. Since 2011, we have worked with our customers as a design and technology enablement partner, guiding them on their digital transformation journeys. Roles And Responsibilities Design, develop, and deploy deep learning models for object classification, detection, and segmentation using CNNs and Transfer Learning. Implement image preprocessing and advanced computer vision pipelines. Optimize deep learning models using pruning, quantization, and ONNX for deployment on edge devices. Work with PyTorch, TensorFlow, and ONNX frameworks to develop and convert models. Accelerate model inference using GPU programming with CUDA and cuDNN. Port and test models on embedded and edge hardware platforms. ( Orin, Jetson, Hailo ) Conduct research and experiments to evaluate and integrate GenAI technologies in computer vision tasks. Explore and implement cloud-based AI workflows, particularly using AWS/Azure AI/ML services. Collaborate with cross-functional teams for data analytics, data processing, and large-scale model training. Required Skills Strong programming experience in Python. Solid background in deep learning, CNNs, and transfer learning and Machine learning basics. Expertise in object detection, classification, segmentation. Proficiency with PyTorch, TensorFlow, and ONNX. Experience with GPU acceleration (CUDA, cuDNN). Hands-on knowledge of model optimization (pruning, quantization). Experience deploying models to edge devices (e.g., Jetson, mobile, Orin, Hailo ) Understanding of image processing techniques. Familiarity with data pipelines, data preprocessing, and data analytics. Willingness to explore and contribute to Generative AI and cloud-based AI solutions. Good problem-solving and communication skills. Preferred (Nice-to-Have) Experience with C/C++. Familiarity with AWS Cloud AI/ML tools (e.g., SageMaker, Rekognition). Exposure to GenAI frameworks like OpenAI, Stable Diffusion, etc. Knowledge of real-time deployment systems and streaming analytics. Qualifications Graduation/Post-graduation in Computers, Engineering, or Statistics from a reputed institute. What We Offer Competitive salary and benefits package. Opportunity to work in a dynamic and innovative environment. Professional development and learning opportunities. Visit us on: CodeCraft Technologies LinkedIn : CodeCraft Technologies LinkedIn Instagram : CodeCraft Technologies Instagram Show more Show less
Posted 3 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We’re Hiring: Manager – AI/ML Engineering 📍 Location: Hyderabad 🕒 Employment Type: Full-Time 📅 Experience: 7+ Years Join Coschool to revolutionize learning with AI. At Coschool, we’re building a next-gen EdTech platform that’s reshaping how students learn. Our AI-first platform is designed to empower educators and support students in achieving their best. We’re now looking for a Manager – AI/ML Engineering to lead and scale our AI initiatives and shape the future of intelligent learning systems. What You’ll Do (Key Responsibilities): Lead and mentor a high-performing team of AI/ML engineers. Research, prototype, and develop robust ML models, including deep learning and GenAI solutions. Oversee the full AI/ML project lifecycle — from data preprocessing to deployment and monitoring. Guide the team in training, fine-tuning, and deploying LLMs using methods like distillation, supervised fine-tuning, and RLHF. Collaborate with product managers, engineers, and leadership to translate business problems into ML solutions. Ensure the scalability, reliability, and security of AI systems in production. Define and enforce best practices in MLOps, model management, and experimentation. Stay on the cutting edge of AI by exploring and applying emerging trends and technologies. You’re a Great Fit If You Have: 7+ years of experience in building AI/ML solutions, with at least 2 years in a leadership or managerial role. Strong foundation in machine learning, deep learning, and computer science fundamentals. Hands-on experience deploying AI models in production using frameworks like PyTorch, TensorFlow, and Scikit-learn. Proficiency with cloud ML platforms (AWS SageMaker, Google AI, Azure ML). Solid understanding of MLOps tools and practices for model lifecycle management. Experience with LLM fine-tuning and training methodologies. Excellent problem-solving, communication, and people management skills. A proactive mindset and passion for innovation and mentorship. Preferred Skills: Experience working with generative AI frameworks. A portfolio or track record of creative AI/ML applications. Familiarity with tools for LLM orchestration and retrieval-augmented generation (RAG). Why Coschool? Real Impact: Build solutions that directly affect the lives of millions of students. Autonomy: Enjoy the freedom to innovate and execute with a clear mission in mind. Growth: Work in a fast-paced, learning-focused environment with top talent. Vision: Be part of a purpose-driven company that combines decades of educational excellence with cutting-edge AI. Show more Show less
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Position: AI/ML Engineer (Python AWS REST APIs) Experience 3 to 5 Years Location: Indore Work from office Job Summary We are seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs. Responsibilities AI/ML Development Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees. Build synthetic datasets using DCGANs for balancing. Fine-tune pre-trained models for customized encryption logic. Implement explainable classification logic for model outputs. Validate model performance using custom metrics and datasets. API Development Design and develop Python RESTful APIs using FastAPI or Flask for: Image upload and classification Model inference endpoints Encryption trigger calls Integrate APIs with AWS Lambda and Amazon API Gateway. AWS Integration Deploy and manage AI models on Amazon SageMaker for training and real-time inference. Use AWS Lambda for serverless backend compute. Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL). Use AWS Cognito for secure user authentication and KMS for key management. Monitor job status via CloudWatch and enable secure, scalable API access. Required Skills & Experience: Must-Have 35 years of experience in AI/ML (especially vision-based systems). Strong experience with PyTorch or TensorFlow for model development. Proficient in Python with experience building RESTful APIs. Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3. Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts. Understanding of model deployment, serialization, and performance tuning. Nice-to-Have Experience with CLIP model fine-tuning. Familiarity with Docker, GitHub Actions, or CI/CD pipelines. Experience in data classification under compliance regimes (e.g., GDPR, HIPAA). Familiarity with multi-tenant SaaS design patterns. Tools & Technologies: Python, PyTorch, TensorFlow FastAPI, Flask AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS Git, Docker, Postgres, OpenCV, OpenSSL Note: For I-VDES Project. Excellent communication and interpersonal skills Ability to work with tight deadlines Kindly share your resume on hr@advantal.net Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Consumer & Community Banking, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives. Job Responsibilities Design and implement highly scalable and reliable data processing pipelines and deploy model inference services. Deploy solutions into public cloud (AWS or Azure) infrastructure. Experiment, develop and productionize high quality machine learning models, services, and platforms to make a huge technology and business impact. Write code to create several machine learning experimentation pipelines. Design and implement feature engineering pipelines and push them to feature stores. Analyze large datasets to extract actionable insights and drive data-driven decision-making. Ensure the scalability and reliability of AI/ML solutions in a production environment. Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 3+ years applied experience Proficient in coding in Javascript, ReactJS, HTML and CSS. Proven experience as a front-end developer with a strong focus on ReactJS and Typescript. Technical feasibility of UI/UX designs and optimize applications for maximum speed and scalability Proficiency in programming languages such as Python, Java etc. Full-stack experience API development, including JavaScript frameworks such as React, would be highly valuable Experience in using GenAI (OpenAI or AWS Bedrock) to solve business problem. Experience with large scale training, validation and testing Experience and skills in training and deploying ML models on AWS SageMaker or Bedrock Experience in machine learning frameworks such as TensorFlow, PyTorch, Pytorch Keras, or Scikit-learn. Familiarity with cloud platforms (AWS) and containerization technologies (Docker, Kubernetes, Amazon EKS, ECS). Preferred Qualifications, Capabilities, And Skills Expert in at least one of the following areas: Natural Language Processing, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis. Knowledge of machine learning frameworks: Pytorch, Keras, MXNet, Scikit-Learn, as well as LLM frameworks, such as LangChain, LangGraph, etc Understanding of finance or investment banking businesses is an added advantage ABOUT US Show more Show less
Posted 3 weeks ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Sagemaker is a rapidly growing field in India, with many companies looking to hire professionals with expertise in this area. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the sagemaker job market.
If you are looking to land a sagemaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:
The salary range for sagemaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
In the sagemaker field, a typical career progression may look like this:
In addition to expertise in sagemaker, professionals in this field are often expected to have knowledge of the following skills:
Here are 25 interview questions that you may encounter when applying for sagemaker roles, categorized by difficulty level:
What is a SageMaker notebook instance?
Medium:
What is the difference between SageMaker Ground Truth and SageMaker Processing?
Advanced:
As you explore opportunities in the sagemaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle,WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris,France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore,Karnataka
Accenture in India
4290 Jobs | Dublin 2