Be a part of Inforizon

We believe in the strength of people and the collective effort of skilled individuals. Inforizon is all about our talented and aspirational team. We have a well-defined, scalable organizational structure with policies that support the future growth of our people. As part of the Inforizon family, you will be encouraged to upgrade your existing skills and to participate actively in cultural events. We welcome individuals who want an enriching workplace with great opportunities and room for future growth.

Backup & Storage Administrator
Job ID: INFIT002

If you are looking for a great opportunity, a place where your ideas are truly valued and your input genuinely makes a difference, join us and explore our openings today. This is a fantastic opportunity to join a fast-growing company with a strong focus on quality, energized by global collaboration, ample opportunity, and solid support for career development. As an Operations Specialist you will be part of a highly professional team that handles the monitoring, maintenance, and operation of our backup infrastructure. Our everyday work is characterized by an open and direct atmosphere, and we put emphasis on creating a challenging yet fun working environment. We pay significant attention to job satisfaction and to personal and professional development.

Your primary responsibilities will be:
- Participate in the daily operations of incident, service, and change request management according to company standards.
- Participate in the monitoring and first-line troubleshooting of clients' systems.
- Contribute to consolidation, automation, and performance optimization.
- Document operations and services.
- Research methods, tools, and solutions, and provide echo training to team members.
- Develop automation scripts and tools for routine tasks (see the sketch after this posting).
- Prepare weekly and monthly reports.

The team works closely with our backup teams in Denmark and the Czech Republic, responsible for everything from infrastructure and applications to complete orchestration and support of our customers' IT solutions. Our work processes are based on best practice, and our procedures follow ITIL (IT Infrastructure Library) standards.

Requirements:
- A relevant bachelor's degree and at least five (5) years in the IT industry, in a service-oriented environment dealing with system and backup operations.
- Experience with enterprise-level backup and recovery solutions (Commvault).
- Experience operating tape libraries (IBM and Spectra Logic).
- Good documentation, communication, and analytical skills.
- Working knowledge of networking and VMware.
- Experience operating and troubleshooting Windows and Unix platforms; adept at troubleshooting Windows/Unix and application errors.
- Able to write short scripts to automate routine work.
- Detail- and process-oriented: doesn't assume, asks for clarification.
- Deadline-oriented, with a sense of urgency.
- Open and honest: speaks their mind and offers sensible opinions and suggestions.
- Strong competencies in incident handling and troubleshooting, with a structured and analytical approach.
- Focused on quality and improvement.
- Well organized; manages time and tasks properly.
- Willing to work shifts.
- A good team player.

The following areas of experience are a plus:
- Backup Exec
- TSM
- Proxmox
- Maintenance and operation of Data Domain; HPE StoreOnce experience is a plus.

Or email your resume to recruiter@inforizonhr.com for future openings.
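To make the "short scripts to automate routine work" requirement concrete, here is a minimal Python sketch in that spirit: scanning a backup job report exported as CSV and flagging failures for follow-up. The file name and the JobID/Client/Status columns are illustrative assumptions, not a real Commvault export format.

```python
# Hypothetical routine-task automation: flag failed jobs in a CSV job report.
# Report path and column names are assumptions for illustration only.
import csv

def failed_jobs(report_path: str) -> list[dict]:
    """Return report rows whose Status column is not 'Completed'."""
    with open(report_path, newline="") as f:
        return [row for row in csv.DictReader(f) if row.get("Status") != "Completed"]

if __name__ == "__main__":
    for job in failed_jobs("daily_backup_report.csv"):
        print(f"Job {job.get('JobID')} on {job.get('Client')}: {job.get('Status')}")
```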
Benefits

Here are some of the perks of being a part of Inforizon:
- Better salary: we offer competitive salaries to keep our employees motivated and to maintain a sense of unity within our company.
- Flexibility: Inforizon offers flexible work hours and remote work options, allowing employees to achieve a better work-life balance.
- Career growth: Inforizon invests in our employees' professional growth by providing training, education, and career opportunities.
- Recognition: we offer performance bonuses and extra financial rewards based on individual and team achievements.
- Technology: we provide the up-to-date tools and technology needed for the job, including equipment and software to facilitate remote work.
Senior Cloud Engineer

As Senior Cloud Engineer, your responsibilities will include, but are not limited to:
- Any technological duties associated with cloud computing, including planning, management, maintenance, and support.
- Operations/project tasks and service development/management.
- Operation of customer environments following ITIL/Agile/DevOps/IaC practices.
- Planning, driving, and executing project activities.
- Service life-cycle management.
Full-Stack Engineer (Remote)
Job ID: INFIT003

We are looking for an experienced Full-Stack Engineer with at least 6 years of hands-on experience building scalable applications. The ideal candidate should have expertise in AWS services, modern frontend and backend technologies, and DevOps practices. This is a fully remote role with flexible working hours.

Key Responsibilities
- Design, develop, and deploy scalable applications using AWS services.
- Implement infrastructure as code using Terraform.
- Develop backend services and APIs using Python.
- Build dynamic user interfaces with React/Vite.
- Write and maintain unit tests using Jest and Pytest (see the sketch after this posting).
- Ensure smooth deployment and CI/CD pipeline integration.
- Work collaboratively in an agile environment, following SDLC best practices.
- Troubleshoot, debug, and optimize applications for high performance.

Required Skills
- AWS services: Lambda, Glue, S3, DynamoDB, EventBridge, AppSync, OpenSearch
- Infrastructure as code
- Backend development
- Frontend development
- Software development lifecycle

Nice to Have
- Experience with clinical applications and healthcare software.
- Knowledge of GxP compliance and implementation of compliant applications.
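As a flavor of the Python/Pytest testing work this posting asks for, here is a minimal sketch: a plain AWS Lambda-style handler plus a Pytest unit test for it. The event shape and handler are illustrative assumptions, not this employer's actual code.

```python
# Hypothetical Lambda handler and Pytest unit test (run with: pytest thisfile.py).
import json

def handler(event, context):
    """Echo back the 'name' field from an API Gateway-style JSON body."""
    body = json.loads(event.get("body") or "{}")
    return {"statusCode": 200, "body": json.dumps({"hello": body.get("name", "world")})}

def test_handler_greets_by_name():
    event = {"body": json.dumps({"name": "Ada"})}
    response = handler(event, None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"]) == {"hello": "Ada"}
```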
Job Title:

Requirements:
- 6+ years of hands-on experience designing and developing enterprise-level Java applications, including web applications, APIs, and microservices.
- Expertise in Java frameworks such as Spring and Hibernate, and familiarity with web technologies and RESTful APIs.
- Experience with version control systems (Git).
- Strong problem-solving skills, analytical thinking, and the ability to architect scalable, performant Java applications that meet business requirements.
- Proficiency in applying design patterns to architect enterprise-level Java applications and microservices.
- Proficiency in analyzing business requirements, user stories, and technical specifications to design and deliver microservices solutions that meet project objectives.
- Experience with database technologies such as Oracle or PostgreSQL, and proficiency in SQL queries and database design.
- Excellent communication and leadership skills, with the ability to work effectively in a collaborative team environment and lead technical initiatives.
Full Stack Developer (4+ years)
Work Mode: Hybrid
Location: Manyata Tech Park, Bangalore

Primary Responsibilities:
- Work with business analysts to estimate and design effective, scalable, and maintainable solutions that meet business initiatives and objectives.
- Develop and unit-test software that meets business requirements and the technical design.
- Troubleshoot pre- and post-production implementations.
- Propose new ideas when there is strong business value, and stay up to date on the latest technology trends and techniques.

Role Requirements:
- 4+ years of experience as a full stack developer working on both frontend and backend technologies.
- 3+ years of experience in frontend technologies such as ReactJS, Node.js, HTML, and CSS.
- 3+ years of experience developing in Java technologies, with equivalent experience in systems analysis, OO design, OO programming, and debugging.
- 3+ years of experience with a technology stack including Spring Boot, Spring Cloud, Gradle, microservices architecture, REST, Java 1.8, and Spark.
- Experience with NoSQL (Cassandra) and SQL (Oracle) databases.
- Experience with build and deployment using Git/Stash/Jenkins, etc.
- eCommerce (retail) experience is nice to have.
- Experience working with Agile/Scrum teams.
Python Developer

Inforizon is looking for a Python Developer to join our dynamic team and embark on a rewarding career journey. A Python Developer is a software developer who specializes in using the Python programming language to build applications, software tools, and data analysis systems.

The role typically includes the following responsibilities:
- Writing and testing code: writing clean, maintainable, and efficient Python code, and testing and debugging it to ensure it meets quality standards.
- Designing and developing applications: designing and developing applications, software tools, and data analysis systems using Python frameworks and libraries.
- Developing and maintaining APIs: creating and maintaining RESTful APIs that enable seamless integration with other systems and applications (see the sketch after this posting).
- Analyzing and manipulating data: using Python libraries and tools to analyze and manipulate data, including data cleaning, transformation, and visualization.

Requirements:
- Strong proficiency in Python, including knowledge of frameworks such as Django, Flask, and Pyramid.
- Experience in software development, including writing and testing code, designing and developing applications, and collaborating with cross-functional teams.
- Knowledge of front-end technologies such as HTML, CSS, and JavaScript is a plus.
- Strong analytical and problem-solving skills.
- Experience working with Agile and/or Scrum methodologies.
- Familiarity with database systems such as MySQL, PostgreSQL, and MongoDB.
- Excellent communication and collaboration skills.
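To illustrate the RESTful API work described above, here is a minimal Flask sketch with one create endpoint and one read endpoint. The routes and the in-memory store are illustrative assumptions, not a production design.

```python
# Minimal hypothetical REST API in Flask: POST /items creates, GET /items/<id> reads.
from flask import Flask, jsonify, request

app = Flask(__name__)
items: dict[int, str] = {}  # in-memory stand-in for a real database

@app.post("/items")
def create_item():
    item_id = len(items) + 1
    items[item_id] = request.get_json()["name"]
    return jsonify({"id": item_id}), 201

@app.get("/items/<int:item_id>")
def get_item(item_id: int):
    if item_id not in items:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": item_id, "name": items[item_id]})

if __name__ == "__main__":
    app.run(debug=True)
```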
Java Developer - Bengaluru (Hybrid)
Job ID: INFIT001

Location: Bengaluru (Hybrid)
Experience: 5-7 years
Notice: Immediate to 15 days
A passport is mandatory for this role.

Basic qualifications:
- Bachelor's degree in Computer Science or a related field.
- 5-8 years of professional experience in software development; you should be able to discuss in depth both the design of, and your significant contributions to, one or more projects.
- Solid understanding of computer science fundamentals: data structures, algorithms, distributed system design, databases, and design patterns.
- Strong coding skills with a modern language and stack (Java, Spring Boot, etc.).
- Experience working in an Agile/Scrum environment.
- REST, PostgreSQL, MongoDB, Redis, Kafka.

Preferred qualifications:
- Experience with warehouse management systems, and with distributed system performance analysis and optimization.
- Strong communication skills; you will be required to proactively engage colleagues both inside and outside of your team.
- Ability to effectively articulate technical challenges and solutions.
Job Title: WLAN Developer / WLAN QA Automation

Role: WLAN Developer
Experience: 3+ years
Designation: Engineer (SW) - Manager (SW)
Location: Chennai / Pollachi / Bangalore / Manesar / Pune / Noida / Kochi

- WLAN Linux device driver development (802.11ax/be preferred)
- WLAN debugging
- WLAN firmware development (802.11ac/n/ax/be)
- Networking/wireless domain knowledge
- hostapd, wpa_supplicant
- Cross-compiling/porting experience
- Performance optimization and firmware enhancements

Role: WLAN QA Automation
Experience: 3+ years
Designation: Engineer (SW) - Manager (SW)
Location: Chennai / Pollachi / Bangalore / Manesar (Pune: client round after internal rounds)

JD: WLAN QA Automation
- Experience as a Python developer, with expertise in test automation using Pytest/pyATS/Robot Framework (see the sketch after this posting).
- Strong knowledge of the Flask framework and API automation.
- Knowledge of version control systems (e.g., Git).
- Strong experience in L2/L3 networking and wireless protocols.
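Here is a minimal sketch of the Pytest-based API automation style the QA role calls for: a session fixture plus one test that checks a health endpoint over HTTP. The base URL and endpoint are placeholders; real tests would target the device-under-test's management API.

```python
# Hypothetical Pytest API check (run with: pytest thisfile.py).
import pytest
import requests

BASE_URL = "http://192.0.2.1:8080"  # TEST-NET placeholder address, an assumption

@pytest.fixture
def session():
    # One shared HTTP session per test, closed automatically afterwards.
    with requests.Session() as s:
        yield s

def test_health_endpoint_reports_ok(session):
    resp = session.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```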
Data Scientist

- Proven experience as a Data Scientist or in a similar role, with at least 4 years of relevant experience and 6-8 years of total experience.
- Technical expertise in data models, database design and development, data mining, and segmentation techniques.
- Strong knowledge of, and experience with, reporting packages (Business Objects and the like), databases, and programming in ETL frameworks.
- Experience with data movement and management in the cloud, using a combination of Azure or AWS features.
- Hands-on experience with data visualization tools; Power BI preferred.
- Solid understanding of machine learning (see the sketch after this posting).
- Knowledge of data management and visualization techniques.
- A knack for statistical analysis and predictive modeling.
- Good knowledge of Python and MATLAB.
- Experience with SQL and NoSQL databases, including the ability to write complex queries and procedures.

Location: HSR Layout, Bengaluru
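As a tiny illustration of the predictive-modeling skills listed above, here is a baseline scikit-learn workflow: split a bundled dataset, fit a classifier, and report held-out accuracy. This is purely illustrative and implies nothing about the employer's actual data or models.

```python
# Hypothetical baseline model: logistic regression on a bundled sklearn dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=5000)  # high max_iter so the solver converges
model.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```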
Backend Developer (Python)

Location: Bengaluru (On-site)
Mandatory: Experience in product-based companies
Type: Full-time

About the Role
We're looking for a backend-focused developer who's just starting out but already thinks critically about system design, clean code, and long-term scalability. You'll be part of a lean, fast-moving team, responsible for building and maintaining the APIs and backend services that power our platform. This is a high-ownership role, ideal for someone who thrives in a fast-paced startup environment and is eager to grow into a world-class engineer.

What You'll Do
- Develop and maintain scalable backend services and APIs.
- Work with MongoDB and other data stores (NoSQL/SQL); see the sketch after this posting.
- Optimize performance and reliability at scale.
- Contribute to code reviews and system architecture decisions.
- Collaborate with the tech team in short feedback loops to ship, learn, and iterate quickly.

What We're Looking For
- 1.5-4 years of backend development experience.
- Proficiency in Python (preferably Django or Flask).
- Understanding of API design and backend architecture principles.
- Familiarity with MongoDB or similar databases.
- Interest in cloud platforms (AWS), CI/CD workflows, and deployment tooling.
- Obsession with clean, maintainable code and modular design.
- A "tech purist" mindset: you enjoy building things the right way, not just the fastest way.
- Exposure to early-stage startups, open-source contributions, or serious side projects.
- Bachelor's degree in CS/Engineering, or equivalent self-taught experience.

Why Join Us
- Collaborate closely with a sharp, hands-on founding team.
- Tackle meaningful problems in learning, careers, and education.
- Shape critical backend systems from the ground up.
- Fast growth and high autonomy in a builder-first environment.
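For a flavor of the MongoDB-backed service work above, here is a minimal pymongo sketch with an idempotent write and a projected read. The connection string, database, and collection names are assumptions for illustration.

```python
# Hypothetical MongoDB data-access helpers using pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # local dev URI, an assumption
courses = client["platform"]["courses"]

def upsert_course(slug: str, title: str) -> None:
    """Idempotent write: create the course or update its title."""
    courses.update_one({"slug": slug}, {"$set": {"title": title}}, upsert=True)

def find_course(slug: str) -> dict | None:
    # Project away the internal _id so the result is JSON-friendly.
    return courses.find_one({"slug": slug}, {"_id": 0})

if __name__ == "__main__":
    upsert_course("python-101", "Intro to Python")
    print(find_course("python-101"))
```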
Duration: 3 months (may extend to a longer term)
Work Hours: 12:30 PM to 9:30 PM IST
Requirement: Must have very good written and oral communication skills, and be strong in the relevant technologies and tools.

Job Description: Senior AWS Solutions/AI Architect (Agentic AI & Cloud Solutions)
======================================================
Responsibilities
------------------
- Lead architecture design for agentic AI systems leveraging AWS services (Bedrock, SageMaker, Lambda, Step Functions, EventBridge, OpenSearch, DynamoDB).
- Define multi-agent orchestration strategies for clinical data workflows, source verification, and discrepancy resolution.
- Evaluate trade-offs between LLM-based reasoning, rules engines, and hybrid AI architectures.
- Mentor AI engineers, developers, and DevOps teams on agent-based architectures.
- Collaborate with clinical SMEs, compliance teams, and sponsors to align technical design with business outcomes.

Qualifications
------------------
- Bachelor's/Master's in Computer Science, AI, or a related field (PhD a plus).
- 8–12 years in AI/ML engineering, with 3–5 years in cloud-native AI architecture.
- Hands-on experience with the AWS AI stack (Bedrock, SageMaker, Comprehend Medical, Textract, Kendra, Lex).
- Strong understanding of agentic AI frameworks (LangChain, LlamaIndex, Haystack, CrewAI, AutoGen).
- Proven expertise in workflow orchestration (Step Functions, Airflow, Temporal) and multi-agent pipelines.
- Experience in healthcare/Life Sciences AI solutions with regulatory compliance.
- Strong leadership, stakeholder communication, and solution governance skills.

Job Description: AWS AI Engineer / Developer (Agentic AI Implementation)
========================================================
Responsibilities
----------------
- Implement agentic AI workflows for clinical source verification, discrepancy detection, and intelligent query generation.
- Build and integrate LLM-powered agents using AWS Bedrock plus open-source frameworks (LangChain, AutoGen).
- Develop event-driven pipelines with AWS Lambda, Step Functions, and EventBridge.
- Optimize prompt engineering, retrieval-augmented generation (RAG), and multi-agent communication.
- Integrate AI agents with external systems through secure APIs.
- Collaborate with data engineers on PHI/PII-safe ingestion pipelines.
- Monitor, test, and fine-tune AI workflows for accuracy, latency, and compliance.

Qualifications
----------------
- Bachelor's in Computer Science, Engineering, or a related field.
- 3–6 years in AI/ML engineering with hands-on LLM/agentic AI development.
- Strong coding skills in Python/TypeScript and experience with LangChain, LlamaIndex, or AutoGen.
- Familiarity with AWS AI services (Bedrock, SageMaker, Textract, Comprehend Medical).
- Experience in API integrations and event-driven architectures.
- Experience in healthcare/Life Sciences AI solutions with regulatory compliance preferred.
- Problem-solving mindset with the ability to experiment and iterate quickly.

Job Description: Machine Learning Engineer (Agentic AI – AWS)
======================================================
Key Responsibilities
----------------
- Design, develop, and deploy ML models for agentic AI use cases.
- Work with the AWS AI/ML ecosystem (SageMaker, Bedrock, Lambda, Step Functions, S3, DynamoDB, Kinesis).
- Preprocess and engineer features from structured, unstructured, and streaming data.
- Collaborate with data engineers to ensure high-quality, well-curated training datasets.
- Implement LLM fine-tuning, embeddings, and retrieval-augmented generation (RAG) pipelines (see the sketch after this job description).
- Evaluate and optimize models for accuracy, performance, scalability, and cost-efficiency.
- Integrate models into production applications and APIs.
- Work with MLOps teams to automate training, testing, deployment, and monitoring workflows.
- Perform experimentation, A/B testing, and model validation to ensure reliability.
- Document experiments, pipelines, and best practices for reproducibility.

Required Skills & Qualifications
----------------
- 3–6 years of experience in ML engineering (depending on seniority).
- Strong programming skills in Python (NumPy, Pandas, scikit-learn, PyTorch, TensorFlow).
- Solid understanding of the ML lifecycle (data preprocessing, training, evaluation, deployment).
- Experience with AWS services for ML (SageMaker, Lambda, ECS/EKS, Step Functions, Bedrock).
- Familiarity with large language models (LLMs), NLP, and embeddings.
- Strong knowledge of APIs and microservice deployment.
- Experience with ML pipeline orchestration (Airflow, Kubeflow, MLflow, or similar).
- Understanding of data versioning, experiment tracking, and model registries.
- Proficiency in SQL/NoSQL databases and vector databases (Weaviate, Pinecone, FAISS).
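As a toy sketch of the retrieval step in a RAG pipeline: embed documents, index them in FAISS, and fetch nearest neighbours for a query. Random vectors stand in here for real embeddings (e.g., from a Bedrock or SageMaker embedding model); dimensions and counts are assumptions.

```python
# Hypothetical RAG retrieval step with FAISS (requires the faiss-cpu package).
import numpy as np
import faiss

dim = 128
rng = np.random.default_rng(0)
doc_vectors = rng.standard_normal((1000, dim)).astype("float32")  # fake embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 nearest-neighbour index
index.add(doc_vectors)

query = rng.standard_normal((1, dim)).astype("float32")
distances, ids = index.search(query, 5)  # top-5 closest documents
print("retrieved doc ids:", ids[0])
```

In a full pipeline, the retrieved document texts would then be stuffed into the LLM prompt as grounding context.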
Job Description: QA Engineer (Agentic AI – AWS)
======================================================
Key Responsibilities
----------------
- Develop and execute test plans, test cases, and automation scripts for AI-powered applications on AWS.
- Test data pipelines (ETL/ELT, streaming, batch) for correctness, completeness, and performance.
- Validate AI/ML workflows (model training, inference, fine-tuning, and retraining pipelines).
- Perform functional, regression, integration, performance, and security testing of cloud-based applications.
- Create automated test frameworks for APIs, microservices, and AWS Lambda functions.
- Implement continuous testing practices within CI/CD pipelines.
- Collaborate with data engineers, ML engineers, and solution architects to ensure end-to-end system quality.
- Monitor production systems and validate results against expected business/AI outcomes.
- Document test results and defects, and provide feedback for process improvements.

Required Skills & Qualifications
----------------
- 3–7 years of QA/testing experience (depending on seniority level).
- Strong knowledge of manual and automated testing practices.
- Strong skills in Python/Java/JavaScript for automation.
- Experience testing cloud-native applications on AWS (S3, Lambda, Step Functions, API Gateway, ECS, DynamoDB, etc.).
- Familiarity with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD, or CodePipeline).
- Understanding of data validation and data quality frameworks (Great Expectations, Deequ, etc.); see the sketch at the end of this section.

Job Description: Data Engineer (Agentic AI – AWS)
======================================================
Key Responsibilities
----------------
- Design, build, and maintain ETL/ELT pipelines for structured and unstructured data in AWS.
- Work with AWS services (e.g., S3, Glue, EMR, Lambda, Kinesis, Step Functions, Redshift, Athena, DynamoDB, RDS) to manage data ingestion, transformation, and storage.
- Ensure data quality, governance, and security across the full lifecycle.
- Enable real-time and batch data processing to support AI-driven workflows.
- Collaborate with AI/ML teams to prepare datasets for model training, inference, and fine-tuning.
- Optimize data infrastructure for scalability, cost-efficiency, and performance.
- Implement CI/CD pipelines and best practices for data engineering in a cloud-native environment.
- Monitor and troubleshoot data pipelines to ensure high availability and reliability.

Required Skills & Qualifications
----------------
- 3–7 years of experience in data engineering (depending on seniority).
- Strong programming skills in Python, SQL, and/or Scala/Java.
- Expertise in the AWS cloud ecosystem (S3, Glue, EMR, Redshift, Athena, Kinesis, Lambda, etc.).
- Experience with data pipeline orchestration tools (Airflow, Step Functions, Dagster, or similar).
- Proficiency with big data frameworks (Spark, Hadoop, or Flink).
- Familiarity with data modeling, warehousing, and schema design.
- Solid understanding of data governance, lineage, and security (IAM, Lake Formation, encryption).
- Experience with real-time streaming data (Kafka, Kinesis, or equivalent).
- Knowledge of DevOps practices (Terraform/CloudFormation, CI/CD, Git, Docker).
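As a minimal sketch of the pipeline data-quality checks both roles touch on, here are plain pandas assertions standing in for frameworks like Great Expectations or Deequ. The column names and rules are illustrative assumptions.

```python
# Hypothetical data-quality gate for a pipeline batch, using plain pandas.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures (empty list = pass)."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("order_id contains duplicates")
    if df["amount"].lt(0).any():
        failures.append("amount contains negative values")
    if df["customer_id"].isna().any():
        failures.append("customer_id has missing values")
    return failures

if __name__ == "__main__":
    batch = pd.DataFrame({"order_id": [1, 2, 2],
                          "amount": [10.0, -5.0, 7.5],
                          "customer_id": [101, None, 103]})
    for problem in validate_orders(batch):
        print("FAIL:", problem)
```

A dedicated framework adds profiling, persisted expectation suites, and reporting on top of checks like these; the shape of the rules is the same.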