5.0 - 10.0 years
5 - 15 Lacs
Kochi
Work from Office
Requires solid automation knowledge and hands-on automation experience on client projects. Solid experience with test-case automation practices within a complex product environment, understanding how and when to use programming skills to solve complex testing problems. C# should be the dominant scripting language, with experience using C# for test automation. Analyze current processes to identify areas for improvement through automation. Knowledge of the UST QE360 framework is preferred. Sound experience in C# scripting and REST API automation. Knowledge of test automation frameworks and testing strategies. Good knowledge of executing automation scripts through CI/CD Azure pipelines. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Required Skills: Test Automation, C#, REST API
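The posting names C# as the primary automation language; purely as an illustration, the shape of a REST API contract check from such a suite can be sketched in Python (the endpoint schema and field names here are invented, not from any real project):

```python
# Minimal sketch of a REST API contract check, the kind of assertion an
# automated suite runs against each endpoint response before deeper tests.
def check_user_response(resp: dict) -> list[str]:
    """Return a list of contract violations (empty list = pass)."""
    errors = []
    if resp.get("status") != 200:
        errors.append(f"unexpected status {resp.get('status')}")
    body = resp.get("body", {})
    # Each expected field must be present and of the expected type.
    for field, ftype in (("id", int), ("name", str), ("active", bool)):
        if not isinstance(body.get(field), ftype):
            errors.append(f"field '{field}' missing or not {ftype.__name__}")
    return errors

if __name__ == "__main__":
    good = {"status": 200, "body": {"id": 7, "name": "Asha", "active": True}}
    bad = {"status": 500, "body": {"id": "7"}}
    print(check_user_response(good))       # []
    print(len(check_user_response(bad)))   # 4
```

The same pattern translates directly to C# with a typed DTO and assertions in NUnit or MSTest.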
Posted 2 weeks ago
3.0 - 8.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Our vision for the future is based on the idea that transforming financial lives starts by giving our people the freedom to transform their own. We have a flexible work environment and fluid career paths. We not only encourage but celebrate internal mobility. We also recognize the importance of purpose, well-being, and work-life balance. Within Empower and our communities, we work hard to create a welcoming and inclusive environment, and our associates dedicate thousands of hours to volunteering for causes that matter most to them. Chart your own path and grow your career while helping more customers achieve financial freedom. Empower Yourself. Empower is currently recruiting an SAP Senior Configuration Analyst for the SAP Business Support team. This position presents an opportunity for an experienced SAP business and configuration professional, viewed as proficient in SAP system implementation and support, to take the next steps in their career. As Financial Services Collection & Disbursement (FS-CD) Application Lead on our established SAP team, the successful candidate will, under general direction, design, document, implement, and support various complex SAP applications. Duties & Responsibilities: Independently work on new projects and support the existing system. Proficiently communicate with business users to gather requirements and perform gap analysis. Analyze system and business needs to effectively map business processes to SAP application modules. Design system solutions; write functional specification documents for various RICEFW objects; perform functional configuration and unit testing; coordinate integration testing; support user acceptance testing; and resolve issues reported during hyper-care. Efficiently coordinate with other technical teams (ABAP, Security, Basis, Batch support). Troubleshoot reported system issues and resolve them according to priority levels. Train and mentor less experienced SAP analysts.
Act as a resource for colleagues with less experience. Complete moderately complex project tasks within defined milestones. Make recommendations for project resource requirements to project managers and/or systems leadership. Maintain requirements documentation, project tracking, and key stakeholder reporting metrics. May lead or direct projects of smaller scope. Work as on-call support (on a rotation basis). Suggest process and system improvements. Qualifications: Bachelor’s degree in Computer Science, Information Systems, or Business, or equivalent experience. Minimum of 5 years of relevant SAP experience. Hands-on configuration experience is required for this position. Experience in the implementation of new SAP enhancements and conversions. Experience in setting up SAP banking, treasury, and cash management configuration and testing, including the setup, testing, and maintenance of house banks and the integration of SAP with banks (NACHA files, check formats, and EDI and BAI file formats). The primary function of this role is to support ongoing FS-CD support and implementation in the system. Serve as subject matter expert for the SAP Financial Services Collection & Disbursement (FS-CD) module. Good knowledge across all FS-CD processes, such as master data, postings and documents, payment plans, payments, returns, direct debit, disbursements, clearing, reversal, closing operations, and broker collections. Proficient in FS-CD and well-versed in subledger concepts, including Shadow G/L. Skilled in data migration within FS-CD and experienced in batch testing for the same. Collaborate with SAP functional and technical teams to analyze and test the impact of system modifications when necessary. Experience writing functional specifications and unit test plans, and with end-user training and integration, functional, regression, UAT, performance, and end-to-end testing.
Good experience in understanding ABAP and DB tables, Electronic Data Interchange (EDI), User Exits, BAPIs, BADIs, and creating queries. Working Conditions/Physical Requirements: Normal office working conditions. May be required to work on-call shifts as necessary. Exposure to the SAP ICM (Incentive and Commission Management) module. Prior end-user experience with SAP CRM, SAP BI, JIRA, and HP ALM is needed. We are an equal opportunity employer with a commitment to diversity. All individuals, regardless of personal characteristics, are encouraged to apply. All qualified applicants will receive consideration for employment without regard to age, race, color, national origin, ancestry, sex, sexual orientation, gender, gender identity, gender expression, marital status, pregnancy, religion, physical or mental disability, military or veteran status, genetic information, or any other status protected by applicable state or local law.
Posted 2 weeks ago
4.0 - 8.0 years
5 - 15 Lacs
Thiruvananthapuram
Work from Office
Job Title: Data Associate - Cloud Data Engineering Experience: 4+ Years Employment Type: Full-Time Industry: Information Technology / Data Engineering / Cloud Platforms Job Summary: We are seeking a highly skilled and experienced Senior Data Associate to join our data engineering team. The ideal candidate will have a strong background in cloud data platforms, big data processing, and enterprise data systems, with hands-on experience across both the AWS and Azure ecosystems. This role involves building and optimizing data pipelines, managing large-scale data lakes and warehouses, and enabling advanced analytics and reporting. Key Responsibilities: Design, develop, and maintain scalable data pipelines using AWS Glue, PySpark, and Azure Data Factory. Work with AWS Redshift, Athena, Azure Synapse, and Databricks to support data warehousing and analytics solutions. Integrate and manage data across MongoDB, Oracle, and cloud-native storage such as Azure Data Lake and S3. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality datasets. Implement data quality checks, monitoring, and governance practices. Optimize data workflows for performance, scalability, and cost-efficiency. Support data migration and modernization initiatives across cloud platforms. Document data flows, architecture, and technical specifications. Required Skills & Qualifications: 8+ years of experience in data engineering, data integration, or related roles. Strong hands-on experience with: AWS Redshift, Athena, Glue, and S3; Azure Data Lake, Synapse Analytics, and Databricks; PySpark for distributed data processing; MongoDB and Oracle databases. Proficiency in SQL, Python, and data modeling. Experience with ETL/ELT design and implementation. Familiarity with data governance, security, and compliance standards. Strong problem-solving and communication skills.
Preferred Qualifications: Certifications in AWS (e.g., Data Analytics Specialty) or Azure (e.g., Azure Data Engineer Associate). Experience with CI/CD pipelines and DevOps for data workflows. Knowledge of data cataloging tools (e.g., AWS Glue Data Catalog, Azure Purview). Exposure to real-time data processing and streaming technologies. Required Skills: Azure, AWS Redshift, Athena, Azure Data Lake
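In a Glue/PySpark pipeline like the one described, the "data quality checks" responsibility often amounts to gating rows before they land in the warehouse. A minimal pure-Python stand-in for such a gate is sketched below; in a real job this logic would run over a PySpark DataFrame, and the column names and 10% threshold are illustrative, not from the posting:

```python
# Sketch of a data-quality gate: split incoming rows into clean and
# rejected sets, and fail the pipeline step if too many rows are bad.
def quality_gate(rows, required=("id", "amount"), max_null_ratio=0.1):
    """Return (clean, rejected); raise if the rejected ratio is too high."""
    clean, rejected = [], []
    for row in rows:
        if all(row.get(col) is not None for col in required):
            clean.append(row)
        else:
            rejected.append(row)
    ratio = len(rejected) / len(rows) if rows else 0.0
    if ratio > max_null_ratio:
        raise ValueError(f"null ratio {ratio:.0%} exceeds {max_null_ratio:.0%}")
    return clean, rejected
```

The same split-and-threshold pattern maps onto `DataFrame.filter` plus a count comparison in PySpark.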
Posted 2 weeks ago
3.0 - 8.0 years
27 - 32 Lacs
Bengaluru
Work from Office
Our vision for the future is based on the idea that transforming financial lives starts by giving our people the freedom to transform their own. We have a flexible work environment and fluid career paths. We not only encourage but celebrate internal mobility. We also recognize the importance of purpose, well-being, and work-life balance. Within Empower and our communities, we work hard to create a welcoming and inclusive environment, and our associates dedicate thousands of hours to volunteering for causes that matter most to them. Chart your own path and grow your career while helping more customers achieve financial freedom. Empower Yourself. Empower is currently recruiting an SAP Lead Configuration Analyst for the SAP Business Support team. This position presents an opportunity for an experienced SAP business and configuration professional, viewed as proficient in SAP system implementation and support, to take the next steps in their career. As Financial Services Collection & Disbursement (FS-CD) Application Lead on our established SAP team, the successful candidate will, under general direction, design, document, implement, and support various complex SAP applications. Duties & Responsibilities: Independently work on new projects and support the existing system. Proficiently communicate with business users to gather requirements and perform gap analysis. Analyze system and business needs to effectively map business processes to SAP application modules. Design system solutions; write functional specification documents for various RICEFW objects; perform functional configuration and unit testing; coordinate integration testing; support user acceptance testing; and resolve issues reported during hyper-care. Efficiently coordinate with other technical teams (ABAP, Security, Basis, Batch support). Troubleshoot reported system issues and resolve them according to priority levels. Complete moderately complex project tasks within defined milestones.
Make recommendations for project resource requirements to project managers and/or systems leadership. Maintain requirements documentation, project tracking, and key stakeholder reporting metrics. Will lead or direct projects of smaller scope and guide less experienced analysts. Work as on-call support (on a rotation basis). Suggest process and system improvements. Qualifications: Bachelor’s degree in Computer Science, Information Systems, or Business, or equivalent experience. Minimum of 8 years of relevant SAP experience. Hands-on configuration experience is required for this position. Experience in the implementation of new SAP enhancements and conversions. Experience in setting up SAP banking, treasury, and cash management configuration and testing, including the setup, testing, and maintenance of house banks and the integration of SAP with banks (NACHA files, check formats, and EDI and BAI file formats). The primary function of this role is to support ongoing FS-CD support and implementation in the system. Serve as subject matter expert for the SAP Financial Services Collection & Disbursement (FS-CD) module. Good knowledge across all FS-CD processes, such as master data, postings and documents, payment plans, payments, returns, direct debit, disbursements, clearing, reversal, closing operations, and broker collections. Proficient in FS-CD and well-versed in subledger concepts, including Shadow G/L. Skilled in data migration within FS-CD and experienced in batch testing for the same. Collaborate with SAP functional and technical teams to analyze and test the impact of system modifications when necessary. Experience writing functional specifications and unit test plans, and with end-user training and integration, functional, regression, UAT, performance, and end-to-end testing. Good experience in understanding ABAP and DB tables, Electronic Data Interchange (EDI), User Exits, BAPIs, BADIs, and creating queries.
Working Conditions/Physical Requirements: Normal office working conditions. May be required to work on-call shifts as necessary. Exposure to the SAP ICM (Incentive and Commission Management) module. Prior end-user experience with SAP ChaRM, SAP BI, JIRA, and HP ALM is needed. We are an equal opportunity employer with a commitment to diversity. All individuals, regardless of personal characteristics, are encouraged to apply. All qualified applicants will receive consideration for employment without regard to age, race, color, national origin, ancestry, sex, sexual orientation, gender, gender identity, gender expression, marital status, pregnancy, religion, physical or mental disability, military or veteran status, genetic information, or any other status protected by applicable state or local law.
Posted 2 weeks ago
5.0 - 10.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Konovo is a global healthcare intelligence company on a mission to transform research through technology, enabling faster, better, connected insights. Konovo provides healthcare organizations with access to over 2 million healthcare professionals—the largest network of its kind globally. With a workforce of over 200 employees across 5 countries (India, Bosnia and Herzegovina, the United Kingdom, Mexico, and the United States), we collaborate to support some of the most prominent names in healthcare. Our customers include over 300 global pharmaceutical companies, medical device manufacturers, research agencies, and consultancy firms. We are expanding our hybrid Bengaluru team to support our transition from a services-based model toward a scalable product- and platform-driven organization. As a DevOps Engineer, you will support the deployment, automation, and maintenance of our software development process and cloud infrastructure on AWS. In this role you will get hands-on experience collaborating with a global, cross-functional team working to improve healthcare outcomes through market research. We are an established but fast-growing business, powered by innovation, data, and technology. Konovo’s capabilities are delivered through our cloud-based platform, enabling customers to collect data from healthcare professionals and transform it into actionable insights using cutting-edge AI combined with proven market research tools and techniques. As a DevOps Engineer, you will learn new tools, improve existing systems, and grow your expertise in cloud operations and DevOps practices. What You’ll Do: Automate infrastructure using Infrastructure as Code tools. Support and improve CI/CD pipelines for application deployment. Work closely with engineering teams to streamline and automate development workflows. Monitor infrastructure performance and help troubleshoot issues. Contribute to team documentation, knowledge sharing, and process improvements.
What We’re Looking For 3+ years of experience in a DevOps or similar technical role. Familiarity with AWS or another cloud provider. Exposure to CI/CD tools such as GitHub Actions, Jenkins, or GitLab CI. Some experience with scripting languages (e.g., Bash, Python) for automation. Willingness to learn and adapt in a collaborative team environment. Nice to Have (Not Required) Exposure to Infrastructure as Code (e.g., CDK, CloudFormation). Experience with containerization technologies (e.g., Docker, ECS). Awareness of cloud security and monitoring concepts. Database management & query optimization experience. Why Konovo? Lead high-impact projects that shape the future of healthcare technology. Be part of a mission-driven company that is transforming healthcare decision-making. Join a fast-growing global team with career advancement opportunities. Thrive in a hybrid work environment that values collaboration and flexibility. Make a real-world impact by helping healthcare organizations innovate faster. This is just the beginning of what we can achieve together. Join us at Konovo and help shape the future of healthcare technology! Apply now to be part of our journey.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Delhi
On-site
As a DevOps Engineer specializing in App Infrastructure & Scaling, you will be a crucial member of our technology team. Your primary responsibility will involve designing, implementing, and maintaining scalable and secure cloud infrastructure that supports our mobile and web applications. Your contributions will be essential in ensuring system reliability, performance optimization, and cost efficiency across different environments. Your key responsibilities will include designing and managing cloud infrastructure on Google Cloud Platform (GCP), implementing horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems. You will also be responsible for developing and managing CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Setting up real-time monitoring, crash alerting, logging systems, and health dashboards using industry-leading tools will be part of your daily tasks. You will collaborate closely with Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system loads. Additionally, you will conduct infrastructure security audits, recommend best practices to prevent downtime and security breaches, and monitor and optimize cloud usage and billing for a cost-effective and scalable architecture. To be successful in this role, you should have at least 3-5 years of hands-on experience in a DevOps or Cloud Infrastructure role, preferably with GCP. Strong proficiency in Docker, Kubernetes, NGINX, and load balancing strategies is essential. Experience with CI/CD pipelines and tools like GitHub Actions, Jenkins, or GitLab CI, as well as familiarity with monitoring tools like Grafana, Prometheus, NewRelic, or Datadog, is required. Deep understanding of API architecture, PHP/Laravel backends, Firebase, and modern mobile app infrastructure is also necessary. 
Preferred qualifications include Google Cloud Professional certification (or equivalent) and experience in optimizing systems for high-concurrency, low-latency environments. Familiarity with Infrastructure as Code (IaC) tools such as Terraform or Ansible is a plus. In summary, as a DevOps Engineer specializing in App Infrastructure & Scaling, you will play a critical role in ensuring the scalability, reliability, and security of the cloud infrastructure that powers our applications. Your expertise will contribute to the overall performance and cost efficiency of our systems.
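The horizontal scaling and auto-scaling groups described above typically follow the target-tracking rule used by, for example, Kubernetes' Horizontal Pod Autoscaler: desired = ceil(current × currentMetric / targetMetric). A small sketch of that decision, with illustrative bounds and a 60% CPU target:

```python
import math

# Toy version of a target-tracking horizontal-scaling decision: given
# current average CPU utilization, compute the replica count that would
# bring utilization back toward the target, clamped to sane bounds.
def desired_replicas(current_replicas, cpu_utilization, target=0.6,
                     min_replicas=2, max_replicas=20):
    """Return the replica count a target-tracking autoscaler would pick."""
    if cpu_utilization <= 0:
        return min_replicas          # idle: scale down to the floor
    raw = math.ceil(current_replicas * cpu_utilization / target)
    return max(min_replicas, min(max_replicas, raw))

print(desired_replicas(4, 0.9))   # 6  -> scale out under load
print(desired_replicas(4, 0.3))   # 2  -> scale in when idle-ish
print(desired_replicas(10, 2.0))  # 20 -> capped at max_replicas
```

Real autoscalers add stabilization windows and cooldowns around this core formula to avoid flapping.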
Posted 2 weeks ago
13.0 - 17.0 years
35 - 45 Lacs
Pune
Hybrid
So, what’s the role all about? This position will lead multiple R&D teams that are developing a portfolio of enterprise-grade, cloud-scale products. We are looking for someone who is an established R&D leader, is passionate about building and operating cloud-native and highly distributed products used by millions of users in a SaaS business model, has a deep understanding of agile development methods, and can lead a team of highly qualified software engineers. How will you make an impact? Work with the line of business to define the product roadmap and strategy. Assist in the development of short-, medium-, and long-term plans to achieve strategic objectives. Work closely with the product manager, technical architect, QA engineers, technical writer, and software engineers to define and develop features big and small for our products. Actively guide and mentor the team to develop features that meet functional, documentation, and quality goals while removing roadblocks. Drive and improve all current processes related to software development across the org as necessary. Manage all people aspects of the team, such as hiring, reviews, mentoring, and promotions. Provide worldwide support to our customers. Play a major role in envisioning and executing next-gen plans (e.g., architecture) to achieve the longer-term strategic objectives of the organization. Prioritize, assign, and manage department activities and projects in accordance with the R&D department’s goals and objectives. Adjust hours of work, priorities, and staff assignments to ensure efficient operation based on workload. Have you got what it takes? 12+ years of experience in software engineering. At least 3 years' experience managing multiple teams of software developers, including first-line managers, architects, and product managers. Proven track record of managing the development of enterprise-grade software products that can perform, scale, and integrate into a broad enterprise ecosystem.
Experience developing and supporting multi-tenant, cloud-native software delivered as a service (SaaS). Good exposure to service-oriented architecture and associated design patterns for development, deployment, and maintenance. Familiar with DevOps processes and tools employed in SaaS architectures to support CI/CD and monitoring. Familiar with quality targets and SLAs for SaaS applications. Experience in product development using technologies such as TypeScript, Node.js, and MongoDB. Familiarity and/or experience with public cloud infrastructures and technologies such as Amazon Web Services (AWS). Experience working abroad or with global teams is preferred. Demonstrated ability to deftly influence others, especially in sensitive or complex situations. Deep experience with agile software development techniques and pitfalls. Excellent communication, problem-solving, and decision-making skills. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7913 Reporting into: Director / Group Manager Role Type: People Manager
Posted 2 weeks ago
10.0 - 15.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Description: Boomi India Lab (11013020) Requirements: Job Description AWS (VPC/ECS/EC2/CloudFormation/RDS), Artifactory. Some knowledge of CircleCI/Saltstack is preferred but not required. Responsibilities: Manage containerized applications using Kubernetes, Docker, etc. Automate builds/deployments (CI/CD) and other repetitive tasks using shell/Python scripts or tools like Ansible, Jenkins, etc. Coordinate with development teams to fix issues and release new code. Set up configuration management using tools like Ansible. Implement highly available, auto-scaling, fault-tolerant, secure setups. Implement automated jobs such as backups, cleanup, start-stop, and reports. Configure monitoring alerts/alarms and act on any outages/incidents. Ensure that the infrastructure is secured and can be accessed only from limited IPs and ports. Understand client requirements, propose solutions, and ensure delivery. Innovate and actively look for improvements in the overall infrastructure. Must Have: Bachelor's degree, with at least 7+ years of experience in DevOps. Should have worked on various DevOps tools like GitLab, Jenkins, SonarQube, Nexus, Ansible, etc. Should have worked on various AWS services: EC2, S3, RDS, CloudFront, CloudWatch, CloudTrail, Route53, ECS, ASG, etc. Well-versed in shell/Python scripting and Linux. Well-versed in web servers (Apache, Tomcat, etc.). Well-versed in containerized applications (Docker, Docker Compose, Docker Swarm, Kubernetes). Have worked on configuration management tools like Puppet, Ansible, etc. Have experience in CI/CD implementation (Jenkins, Bamboo, etc.). Self-starter with the ability to deliver under tight timelines. Good to have: Exposure to various tools like New Relic, ELK, Jira, Confluence, etc. Prior experience in managing infrastructure for public-facing web applications. Prior experience in handling client communications. Basic networking knowledge – VLAN, Subnet, VPC, etc. Knowledge of databases (PostgreSQL).
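One of the "automated jobs" named in the responsibilities (cleanup) is commonly a retention script run from cron or a pipeline. A minimal sketch in Python, where the 7-day window and dry-run default are illustrative choices rather than anything from the posting:

```python
import time
from pathlib import Path

# Sketch of a retention-based cleanup job: find files older than a cutoff
# and delete them. dry_run defaults to True so a first run only reports.
def cleanup_old_files(directory, max_age_days=7, dry_run=True):
    """Return files past retention; delete them unless dry_run is set."""
    cutoff = time.time() - max_age_days * 86400
    expired = [p for p in Path(directory).iterdir()
               if p.is_file() and p.stat().st_mtime < cutoff]
    if not dry_run:
        for p in expired:
            p.unlink()
    return expired
```

The same shape works for S3 cleanup by swapping `Path.iterdir` for an object listing, though S3 lifecycle rules usually handle that natively.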
Key Skills (must have): Jenkins, Docker, Python, Groovy, Shell-Scripting, Artifactory, GitLab, Terraform, VMware, PostgreSQL, AWS, Kafka. What We Offer: Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laidback environment — or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can enjoy coffee or tea with colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!
Posted 2 weeks ago
3.0 - 5.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Overview: Analyzes, develops, designs, and maintains software for the organization's products and systems. Performs system integration of software and hardware to maintain throughput and program consistency. Develops, validates, and tests structures and user documentation. Work may be reviewed for accuracy and overall adequacy. Follows established processes and directions. We are seeking a passionate and detail-oriented SDET to join our QA engineering team. The ideal candidate will have strong Python programming skills and hands-on experience testing cloud-native microservices deployed on Google Cloud Platform (GCP) or an equivalent cloud platform. You will be responsible for designing and implementing automated test frameworks, ensuring the reliability and scalability of distributed systems. Responsibilities SDET – Python & GCP Microservices Testing Experience Level: 2–4 Years Key Responsibilities: Develop and maintain automated test suites using Python and frameworks like Pytest and Robot Framework. Design and execute test cases for RESTful APIs, microservices, and event-driven architectures. Collaborate with developers and DevOps to integrate tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions). Perform functional, integration, regression, and performance testing. Validate deployments and configurations in GCP environments using tools like Cloud Build, GKE, and Terraform. Monitor and troubleshoot test failures, log issues, and ensure timely resolution. Contribute to test strategy, documentation, and quality metrics. Implement AI-driven testing methodologies to enhance test coverage and efficiency.
Required Skills: 2–4 years of experience in software testing or SDET roles. Strong proficiency in Python for test automation. Experience testing microservices and cloud-native applications. Familiarity with GCP services such as Cloud Functions, Pub/Sub, Cloud Run, and GKE. Hands-on experience with Docker, Kubernetes, and Linux-based environments. Knowledge of Git, Jenkins, and CI/CD workflows. Understanding of QA methodologies, SDLC, and Agile practices. Preferred Skills: Exposure to performance testing tools like JMeter, Locust, or k6. Experience with Jenkins. Familiarity with GitHub Copilot. MongoDB. Knowledge of BDD/TDD practices. Nice-to-Have: Experience with LLMs and generative AI platforms. Contributions to open-source testing frameworks or AI communities. Preferred Education: Bachelor's or Master's degree in an appropriate engineering discipline required. Preferred Work Experience (years): Bachelor's degree and 2+ years, or Master's degree with no experience. All Other Regions: Bachelor's or Master's degree in an appropriate engineering discipline required. Preferred work experience (years): 2+ years of experience.
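For the event-driven architectures mentioned above, a typical SDET task is validating message contracts (e.g., Pub/Sub payloads) before they reach downstream services. A minimal sketch, where the schema is invented purely for illustration:

```python
import json

# Illustrative event-contract validator: check that a Pub/Sub-style JSON
# message carries the fields downstream consumers expect.
REQUIRED_FIELDS = {"event_id": str, "event_type": str, "payload": dict}

def validate_event(raw: str):
    """Parse a JSON event string and return (ok, problems)."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError as exc:
        return False, [f"not valid JSON: {exc.msg}"]
    problems = [f"'{k}' missing or not {t.__name__}"
                for k, t in REQUIRED_FIELDS.items()
                if not isinstance(event.get(k), t)]
    return not problems, problems
```

In a Pytest suite, each known-good and known-bad message becomes a parametrized case over this validator.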
Posted 2 weeks ago
1.0 - 5.0 years
8 - 12 Lacs
Ahmedabad
Work from Office
About the Company e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys and Naturium, high-performance, biocompatible, clinically-effective and accessible skincare. In our Fiscal year 24, we had net sales of $1 Billion and our business performance has been nothing short of extraordinary with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and are the fastest growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, and a hybrid 3 day in office, 2 day at home work environment. We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us Job Summary: We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and deployment of advanced AI solutions across our enterprise. The ideal candidate will have a deep understanding of AI/ML algorithms, scalable systems, and data engineering best practices. Responsibilities Design and develop production-grade AI and machine learning models for real-world applications (e.g., recommendation engines, NLP, computer vision, forecasting). Lead model lifecycle management from experimentation and prototyping to deployment and monitoring. 
Collaborate with cross-functional teams (product, data engineering, MLOps, and business) to define AI-driven features and services. Perform feature engineering, data wrangling, and exploratory data analysis on large-scale structured and unstructured datasets. Build and maintain scalable AI infrastructure using cloud services (AWS, Azure, GCP) and MLOps best practices. Mentor junior AI engineers, guiding them in model development, evaluation, and deployment. Continuously improve model performance by leveraging new research, retraining on new data, and optimizing pipelines. Stay current with the latest developments in AI, machine learning, and deep learning through research, conferences, and publications.
Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field. 14+ years of IT experience, including a minimum of 6 years in AI engineering, particularly building LLM-based applications and prompt-driven architectures. Solid understanding of Retrieval-Augmented Generation (RAG) patterns and vector databases (especially Qdrant). Hands-on experience in deploying and managing containerized services in AWS ECS and using CloudWatch for logs and diagnostics. Familiarity with AWS Bedrock and working with foundation models through its managed services. Experience working with AWS RDS (MySQL or MariaDB) for structured data storage and integration with AI workflows. Practical experience with LLM fine-tuning techniques, including full fine-tuning, instruction tuning, and parameter-efficient methods like LoRA or QLoRA. Strong understanding of recent AI advancements such as multi-agent systems, AI assistants, and orchestration frameworks. Proficiency in Python and experience working directly with LLM APIs (e.g., OpenAI, Anthropic, or similar). Comfortable working in a React frontend environment and integrating backend APIs. Experience with CI/CD pipelines and infrastructure as code (e.g., Terraform, AWS CDK).
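To make the "parameter-efficient methods like LoRA" point concrete, here is a back-of-the-envelope comparison of trainable parameters for one weight matrix. The layer dimensions and rank are illustrative only, not tied to any particular model.

```python
def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Trainable parameters for one d_in x d_out weight matrix:
    full fine-tuning vs a LoRA adapter (biases ignored)."""
    full = d_in * d_out            # every weight is trainable
    lora = rank * (d_in + d_out)   # low-rank factors A (d_in x r) and B (r x d_out)
    return full, lora

# Illustrative numbers: a square 4096-dimensional layer, rank-8 adapter.
full, lora = lora_param_counts(4096, 4096, rank=8)
reduction = full / lora  # how many times fewer parameters LoRA trains
```

For these (hypothetical) dimensions the adapter trains 65,536 parameters instead of roughly 16.8 million, a 256x reduction per layer, which is why LoRA/QLoRA make fine-tuning large models tractable.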
Minimum Work Experience 6 Maximum Work Experience 15 This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered a detailed description of all the work inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisor’s discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
Posted 2 weeks ago
3.0 - 8.0 years
5 - 8 Lacs
Hyderabad
Work from Office
Job Summary We are seeking a passionate and skilled Generative AI Engineer with 3+ years of experience in building LLM-based applications. The ideal candidate should have hands-on experience with multi-agent frameworks (LangGraph, CrewAI), agent orchestration, RAG pipelines, and Chain of Thought (CoT) reasoning to develop intelligent, context-aware systems. Key Responsibilities Design and develop multi-agent LLM applications using frameworks like LangGraph and CrewAI. Build, configure, and orchestrate intelligent agents and tools for autonomous task execution. Implement RAG (Retrieval-Augmented Generation) and Agentic RAG pipelines to augment LLMs with external data sources. Develop and fine-tune prompts to support Chain of Thought and step-wise reasoning. Integrate vector databases (e.g., FAISS, Pinecone, Chroma) for context retrieval. Collaborate with frontend/backend teams to embed GenAI capabilities into products. Required Skills Strong programming experience with Python. Hands-on with LangGraph, CrewAI, LangChain, or other agent frameworks. Solid understanding of LLMs, prompt engineering, and tool invocation patterns. Experience with RAG, vector databases, embeddings, and document chunking. Knowledge of Chain of Thought prompting, memory handling, and agent planning logic. Familiarity with OpenAI, Hugging Face, or other LLM APIs. Nice to Have Experience deploying GenAI apps with FastAPI, Streamlit, or React.js. Understanding of semantic search, knowledge graphs, or graph databases. Exposure to MLOps, Docker, CI/CD pipelines, and cloud deployment (AWS/GCP/Azure). Awareness of ethical AI and data governance practices.
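A toy sketch of the RAG retrieval step this posting describes, using bag-of-words vectors and cosine similarity in place of real model embeddings and a vector database (FAISS, Pinecone, or Chroma). The documents and query are invented for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real pipelines use model embeddings
    stored in a vector database."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the LLM prompt with retrieved context (the RAG step)."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "invoice processing takes three business days",
    "the cafeteria is open from nine to five",
    "refunds for an invoice require manager approval",
]
prompt = build_prompt("how long does invoice processing take", docs)
```

A production pipeline would add document chunking, a real embedding model, and a vector store; the shape of the flow (embed, retrieve top-k, stuff context into the prompt) stays the same.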
Posted 2 weeks ago
3.0 - 5.0 years
3 - 8 Lacs
Noida
Work from Office
Strong proficiency in React.js, including Redux Toolkit (RTK) for state management. Solid experience with Node.js, Express.js, and building RESTful APIs. Proficiency in JavaScript and TypeScript. Experience integrating APIs with frontend frameworks. Strong understanding of HTML5, CSS3, and responsive design principles. Experience with Git, CI/CD tools, and Agile development practices. Exposure to PostgreSQL or any relational database. Good to have: knowledge of AWS. Total Experience Expected: 4–6 years
Posted 2 weeks ago
10.0 - 15.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Position Overview As an Engineering Manager, you will lead a team of software engineers in building scalable, reliable, and efficient web applications and microservices for the “News and Competitive Data Analysis Platform”. You will drive technical excellence, system architecture, and best practices while fostering a high-performance engineering culture. You’ll be responsible for managing engineering execution, mentoring engineers, and ensuring the timely delivery of high-quality solutions. Your expertise in Python, Django, React, Apache Solr, RabbitMQ, and Postgres, along with NoSQL cloud databases, will help shape the technical strategy of our SaaS platform. You will collaborate closely with Product, Design, and DevOps teams to align engineering efforts with business goals. Key Responsibilities: Lead the design and development of internal platforms that empower product teams to build, deploy, and scale services seamlessly. Integrate AI/ML capabilities into platform tools, such as semantic search, intelligent alerting, auto-scaling, and workflow automation. Optimize distributed backend systems (using Celery, RabbitMQ) for efficiency, reliability, and performance across tasks like crawling, processing, and notifications. Collaborate closely with DevOps, SRE, data, and ML teams to build secure, observable, and scalable infrastructure across AWS and GCP. Drive cloud modernization, including the strategic migration from GCP to AWS, standardizing on containerization and CI/CD best practices. Foster a culture of platform ownership, engineering excellence, and continuous improvement across tooling, monitoring, and reliability. Mentor engineers, enabling them to grow technically while influencing platform architecture and cross-team impact. Required Experience/Skills: 8+ years of experience in backend or platform engineering, with 2+ years in a technical leadership role.
Strong hands-on experience with Python (Django, Celery), React, and building scalable distributed systems. Deep knowledge of message brokers (RabbitMQ), PostgreSQL, and search technologies like Apache Solr or Elasticsearch. Exposure to AI/ML technologies—such as NLP, semantic search, LLMs, or vector databases (e.g., FAISS, Pinecone). Experience with CI/CD pipelines, container orchestration (Docker, Kubernetes), and observability tools. Proven ability to lead cloud migration initiatives and manage infrastructure across AWS/GCP. A platform-first mindset—focused on developer productivity, reusability, scalability, and system performance.
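The Celery/RabbitMQ worker pattern this role centers on, reduced to an in-process sketch using only the standard library. A real deployment would run Celery workers against a broker; here threads consume from a queue, and the "crawled:" results stand in for actual crawl tasks.

```python
import queue
import threading

tasks: queue.Queue = queue.Queue()
results: list = []
lock = threading.Lock()

def worker() -> None:
    """Worker loop: pull a job, process it, acknowledge it."""
    while True:
        job = tasks.get()
        if job is None:          # poison pill: shut this worker down
            tasks.task_done()
            break
        url, = job
        with lock:               # results list is shared across workers
            results.append(f"crawled:{url}")
        tasks.task_done()

# Start a small worker pool, enqueue work, then drain and stop.
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for url in ["a.example", "b.example", "c.example"]:
    tasks.put((url,))
for _ in workers:
    tasks.put(None)
tasks.join()
for w in workers:
    w.join()
```

The producer/worker separation is the same idea that lets crawling, processing, and notification tasks scale independently behind RabbitMQ; the broker adds durability and cross-host distribution that an in-process queue cannot.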
Posted 2 weeks ago
14.0 - 20.0 years
15 - 20 Lacs
Pune
Hybrid
So, what’s the role all about? We are looking for a highly skilled and motivated Site Reliability Engineering (SRE) Manager to lead a team of SREs in designing, building, and maintaining scalable, reliable, and secure infrastructure and services. You will work closely with engineering, product, and security teams to improve system performance, availability, and developer productivity through automation and best practices. How will you make an impact? Build server-side software using Java. Lead and mentor a team of SREs; support their career growth and ensure strong team performance. Drive initiatives to improve availability, reliability, observability, and performance of applications and infrastructure. Establish SLOs/SLAs and implement monitoring systems, dashboards, and alerting to measure and uphold system health. Develop strategies for incident management, root cause analysis, and postmortem reporting. Build scalable automation solutions for infrastructure provisioning, deployments, and system maintenance. Collaborate with cross-functional teams to design fault-tolerant and cost-effective architectures. Promote a culture of continuous improvement and reliability-first engineering. Participate in capacity planning and infrastructure scaling. Manage on-call rotations and ensure incident response processes are effective and well-documented. Work in a fast-paced, fluid landscape while managing and prioritizing multiple responsibilities. Have you got what it takes? Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 10+ years of overall experience in SRE/DevOps roles, with at least 2 years managing technical teams. Proficiency in at least one programming language (e.g., Python, Go, Java, C#) and experience with scripting languages (e.g., Bash, PowerShell).
Deep understanding of cloud computing platforms (e.g., AWS) and the working and reliability constraints of the prominent services (e.g., EC2, ECS, Lambda, DynamoDB). Experience with infrastructure as code tools such as CloudFormation and Terraform. Deep understanding of CI/CD concepts and experience with CI/CD tools such as Jenkins, GitLab CI/CD, or CircleCI. Strong knowledge of containerization technologies (e.g., Docker, Kubernetes) and microservices architecture. Experience with monitoring and observability tools (e.g., Prometheus, Grafana, ELK). Working experience with the Grafana observability suite (Loki, Mimir, Tempo). Experience implementing the OpenTelemetry protocol in a microservice environment. Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed systems. Experience with incident management and blameless postmortems, including driving incident response, resolution, and communication during outages and other critical incidents in a cross-functional team setup. Good to have skills: Hands-on experience working with large Kubernetes clusters; certification is an added plus. Administration and/or development experience with standard monitoring and automation tools such as Splunk, Datadog, PagerDuty, and Rundeck. Familiarity with configuration management tools like Ansible, Puppet, or Chef. Certifications such as AWS Certified DevOps Engineer, Google Cloud Professional DevOps Engineer, or equivalent.
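The "establish SLOs" responsibility in miniature: computing availability and error-budget burn for a reporting window. The traffic numbers are hypothetical.

```python
def error_budget(slo: float, total_requests: int, failed_requests: int) -> dict:
    """Availability against an SLO target and error-budget burn for a window."""
    availability = 1 - failed_requests / total_requests
    allowed_failures = (1 - slo) * total_requests  # budget in failed requests
    return {
        "availability": availability,
        "budget_total": allowed_failures,
        "budget_consumed": failed_requests / allowed_failures,  # fraction burned
        "slo_met": availability >= slo,
    }

# Hypothetical month: 99.9% availability SLO, 10M requests, 4,000 failures.
report = error_budget(slo=0.999, total_requests=10_000_000, failed_requests=4_000)
```

Here 40% of the month's error budget is consumed, which is the kind of signal teams use to decide whether to prioritize reliability work over feature delivery before the budget runs out.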
Posted 2 weeks ago
3.0 - 6.0 years
13 - 17 Lacs
Mumbai
Work from Office
Overview We are seeking a detail-oriented and technically skilled Senior Associate – Quality Assurance professional to join our Private Capital Solutions (PCS) team. This role is critical in ensuring the delivery of high-quality software products through a combination of manual, automation, and performance testing practices. The ideal candidate will have hands-on experience with modern QA frameworks and tools like Playwright, Azure DevOps, and JMeter, and will contribute to automation strategy, execution, and team guidance across key PCS initiatives including investment dashboards and analytics. Ensure high-quality delivery of software through rigorous QA processes across manual, automation, and performance testing. Support both BAU QA tasks and new initiatives by executing comprehensive test coverage across platforms, APIs, and dashboards. Collaborate closely with developers, product owners, and business analysts to align QA deliverables with business and release priorities. Contribute to automation framework maintenance, regression test planning, and release readiness assessments. Participate in Agile ceremonies such as sprint planning, backlog grooming, and QA effort estimation. Mentor junior QA team members and share best practices in tools, frameworks, and quality standards. Responsibilities Automation Testing: Design and implement automated test scripts using Microsoft Playwright with Page Object Model (POM) design patterns. Build and maintain custom automation frameworks for both UI and API validations. Integrate automated test suites into Azure DevOps pipelines using YAML; CI/CD pipeline knowledge preferred. Debug script failures, enhance selector logic (including XPath), and leverage AI tools (e.g., Cursor, ChatGPT) to improve QA efficiency. Manual Testing: Analyze business and technical requirements to create clear, comprehensive test cases. Execute functional, regression, and exploratory testing; log and track defects in JIRA. 
Collaborate with cross-functional teams to ensure timely and accurate issue resolution. Validate entitlement-based access, filter workflows, and data correctness across dashboards and investment modules. Performance Testing: Develop performance test plans and scripts using Apache JMeter. Configure JMeter components like samplers, controllers, listeners, and assertions for realistic load testing. Analyze performance metrics (response time, throughput, errors) and identify optimization areas. Team & Process Contribution: Act as a QA point-of-contact in cross-functional initiatives. Drive improvements in test coverage, execution speed, and defect identification. Promote a culture of quality ownership, code/test review, and continuous learning. Qualifications 8-9 years of hands-on experience in QA (manual, automation, and/or performance testing). Strong expertise in Playwright (or Selenium/Cypress), JavaScript/TypeScript, REST API validation, and Postman. Good understanding of financial services domain, especially private capital, investments, or portfolio analytics (preferred). Experience in creating and managing test automation pipelines in Azure DevOps. Proficiency in SQL, scripting (JavaScript or Groovy), and performance testing using JMeter. Familiarity with Agile methodologies and tools like JIRA, Confluence, Git, and VS Code. Strong analytical skills, attention to detail, and problem-solving mindset. Bachelor's degree in Computer Science, Information Technology, or a related field. What we offer you Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. 
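The Page Object Model pattern called out under Automation Testing, sketched with a fake driver so it runs without a browser. The selectors and the FakePage class are invented; a real suite would pass in a Playwright page object instead.

```python
class LoginPage:
    """Page Object Model sketch: selectors and actions live in one class,
    so tests never touch raw locators directly."""
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, page):
        self.page = page  # any object exposing fill() and click()

    def login(self, user: str, password: str) -> None:
        self.page.fill(self.USERNAME, user)
        self.page.fill(self.PASSWORD, password)
        self.page.click(self.SUBMIT)

class FakePage:
    """Records interactions so the page object can be unit-tested
    without launching a browser."""
    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))

fake = FakePage()
LoginPage(fake).login("analyst", "s3cret")
```

When a selector changes, only the page object is edited; every test that calls `login()` keeps working, which is the maintainability payoff POM brings to Playwright suites.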
A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients. Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles. We actively nurture an environment that builds a sense of inclusion belonging and connection, including eight Employee Resource Groups. All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women’s Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. 
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
Posted 2 weeks ago
3.0 - 4.0 years
3 - 7 Lacs
Mumbai
Work from Office
Job Summary We are seeking an experienced and motivated Data Engineer to join our growing team, preferably with experience in the Banking, Financial Services, and Insurance (BFSI) sector. The ideal candidate will have a strong background in designing, building, and maintaining robust and scalable data infrastructure. You will play a crucial role in developing our data ecosystem, ensuring data quality, and empowering data-driven decisions across the organization. This role requires hands-on experience with the Google Cloud Platform (GCP) and a passion for working with cutting-edge data technologies. Responsibilities Design and Develop End-to-End Data Engineering Pipelines: Build and maintain scalable and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data from various sources. Implement Data Quality and Governance: Establish and enforce processes for data validation, transformation, auditing, and reconciliation to ensure data accuracy, completeness, and consistency. Build and Maintain Data Storage Solutions: Design, implement, and manage data vault and data mart to support business intelligence, analytics, and reporting requirements. Orchestrate and Automate Workflows: Utilize workflow management tools to schedule, monitor, and automate complex data workflows and ETL processes. Optimize Data Infrastructure: Continuously evaluate and improve the performance, reliability, and cost-effectiveness of our data infrastructure and pipelines. Collaborate with Stakeholders: Work closely with data analysts, data scientists, and business stakeholders to understand their data needs and deliver effective data solutions. Documentation: Create and maintain comprehensive documentation for data pipelines, processes, and architectures. Key Skills Python: Proficient in Python for data engineering tasks, including scripting, automation, and data manipulation.
PySpark: Strong experience with PySpark for large-scale data processing and analytics. SQL: Expertise in writing complex SQL queries for data extraction, transformation, and analysis. Tech Stack (Must Have) Google Cloud Platform (GCP): Dataproc: For managing and running Apache Spark and Hadoop clusters. Composer (Airflow): For creating, scheduling, and monitoring data workflows. Cloud Functions: For event-driven serverless data processing. Cloud Run: For deploying and scaling containerized data applications. Cloud SQL: For managing relational databases. BigQuery: For data warehousing, analytics, and large-scale SQL queries. Qualifications Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 3+ years of proven experience in a Data Engineer role. Demonstrable experience with the specified "must-have" tech stack. Strong problem-solving skills and the ability to work independently and as part of a team. Excellent communication and interpersonal skills. Good to Have Experience in the BFSI (Banking, Financial Services, and Insurance) domain. Apache NiFi: Experience with data flow automation and management. Qlik: Familiarity with business intelligence and data visualization tools. AWS: Knowledge of Amazon Web Services data services. DevOps and FinOps: Understanding of DevOps principles and practices (CI/CD, IaC) and cloud financial management (FinOps) to optimize cloud spending.
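A minimal version of the data-validation and reconciliation step described in the responsibilities: null and duplicate-key checks over a batch before load. The records and field names are illustrative; a production pipeline would run equivalent checks in PySpark or BigQuery at scale.

```python
def validate_records(records: list[dict], required: list[str]) -> dict:
    """Simple data-quality pass: count nulls in required fields and
    duplicate primary keys before loading a batch downstream."""
    null_counts = {f: 0 for f in required}
    seen, duplicates = set(), 0
    for row in records:
        for f in required:
            if row.get(f) in (None, ""):
                null_counts[f] += 1
        key = row.get("id")
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(records), "nulls": null_counts, "duplicate_ids": duplicates}

# Hypothetical batch with one empty field, one null, and a repeated id.
rows = [
    {"id": 1, "account": "A-1", "amount": 100.0},
    {"id": 2, "account": "", "amount": 250.0},
    {"id": 2, "account": "A-3", "amount": None},
]
report = validate_records(rows, required=["account", "amount"])
```

A pipeline would typically fail or quarantine the batch when these counts exceed a threshold, rather than silently loading bad data into the warehouse.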
Posted 2 weeks ago
8.0 - 10.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Job Overview We are looking for a visionary Lead DevOps Engineer with a strong background in architecting scalable and secure cloud-native solutions on AWS. This leadership role will drive DevOps strategy, design cloud architectures, and mentor a team of engineers while ensuring operational excellence and reliability across infrastructure and deployments. The ideal candidate will: Architect and implement scalable, highly available, and secure infrastructure on AWS. Define and enforce DevOps best practices across CI/CD, IaC, observability, and container orchestration. Lead the adoption and optimization of Kubernetes for scalable microservices infrastructure. Develop standardized Infrastructure as Code (IaC) frameworks using Terraform or CloudFormation. Champion automation at every layer of infrastructure and application delivery pipelines. Collaborate with cross-functional teams (Engineering, SRE, Security) to drive cloud-native transformation. Provide technical mentorship to junior DevOps engineers, influencing design and implementation decisions. Primary Skills Bachelor's degree in Computer Science, Information Technology, or a related field. 7+ years of DevOps or Cloud Engineering experience with strong expertise in AWS. Proven experience designing and implementing production-grade cloud architectures. Hands-on experience with containerization and orchestration (Docker, Kubernetes). Proficient in building CI/CD workflows using Jenkins and/or GitHub Actions. Deep understanding of Infrastructure as Code using Terraform (preferred) or CloudFormation. Strong scripting/automation expertise in Python or Go. Familiarity with service mesh, secrets management, and policy as code (e.g., Istio, Vault, OPA). Strong problem-solving and architectural thinking skills. Excellent verbal and written communication skills with a track record of technical leadership. AWS Certified Solutions Architect (Professional/Associate), CKA/CKAD, or Terraform Associate is a plus. 
Good to Have Skills Exposure to AI & ML Exposure to cloud cost optimization and FinOps practices. Roles and Responsibilities Lead the architecture and implementation of scalable, secure, and cost-efficient infrastructure solutions on AWS. Define Kubernetes cluster architecture, implement GitOps/ArgoCD-based deployment models, and manage multi-tenant environments. Establish and maintain standardized CI/CD pipelines with embedded security and quality gates. Build and maintain reusable Terraform modules to enable infrastructure provisioning at scale across multiple teams. Drive observability strategy across all services, including metric collection, alerting, and logging with tools like Prometheus, Datadog, CloudWatch, and ELK. Automate complex operational workflows and disaster recovery processes using Python/Go scripts and native AWS services. Review and approve high-level design documents and support platform roadmap planning. Mentor junior team members and foster a culture of innovation, ownership, and continuous improvement. Stay abreast of emerging DevOps and AWS trends, and drive adoption of relevant tools and practices.
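One small example of the operational-automation work listed above: retrying an idempotent step with exponential backoff, the kind of guardrail provisioning and disaster-recovery scripts rely on. The flaky operation is simulated for the sketch.

```python
import time

def with_retries(operation, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Retry an idempotent operation with exponential backoff.
    `sleep` is injectable so the behavior can be tested without waiting."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the failure
            sleep(base_delay * (2 ** attempt))  # 0.01, 0.02, 0.04, ...

# Simulated transient failure: succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky, sleep=lambda _: None)  # no real sleeping in the demo
```

Real automation would usually add jitter to the delay and retry only on error classes known to be transient (throttling, timeouts), not on every exception.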
Posted 2 weeks ago
6.0 - 10.0 years
15 - 25 Lacs
Chennai
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As an AWS Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Key Responsibilities: 1. Data Pipeline Design & Development Design and develop scalable, resilient, and secure ETL/ELT data pipelines using AWS services. Build and optimize data workflows leveraging AWS Glue, EMR, Lambda, and Step Functions. Implement batch and real-time data ingestion using Kafka, Kinesis, or AWS Data Streams. Ensure efficient data movement across S3, Redshift, DynamoDB, RDS, and Snowflake. 2. Cloud Data Engineering & Storage Architect and manage data lakes and data warehouses using Amazon S3, Redshift, and Athena. Optimize data storage and retrieval using Parquet, ORC, Avro, and columnar storage formats. Implement data partitioning, indexing, and query performance tuning. Work with NoSQL databases (DynamoDB, MongoDB) and relational databases (PostgreSQL, MySQL, Aurora). 3. Infrastructure as Code (IaC) & Automation Deploy and manage AWS data infrastructure using Terraform, AWS CloudFormation, or AWS CDK. 
Implement CI/CD pipelines for automated data pipeline deployments using GitHub Actions, Jenkins, or AWS CodePipeline. Automate data workflows and job orchestration using Apache Airflow, AWS Step Functions, or MWAA. 4. Performance Optimization & Monitoring Optimize Spark, Hive, and Presto queries for performance and cost efficiency. Implement auto-scaling strategies for AWS EMR clusters. Set up monitoring, logging, and alerting with AWS CloudWatch, CloudTrail, and Prometheus/Grafana. 5. Security, Compliance & Governance Implement IAM policies, encryption (AWS KMS), and role-based access controls. Ensure compliance with GDPR, HIPAA, and industry data governance standards. Monitor data pipelines for security vulnerabilities and unauthorized access. 6. Collaboration & Stakeholder Engagement Work closely with data analysts, data scientists, and business teams to understand data needs. Document data pipeline designs, architecture decisions, and best practices. Mentor and guide junior data engineers on AWS best practices and optimization techniques. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. 
Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Skills and Experience 7+ years of experience in data engineering with a focus on AWS cloud technologies. Expertise in AWS Glue, Lambda, EMR, Redshift, Kinesis, and Step Functions. Proficiency in SQL, Python, Java, and PySpark for data transformations. Strong understanding of ETL/ELT best practices and data warehousing concepts. Experience with Apache Airflow or Step Functions for orchestration. Familiarity with Kafka, Kinesis, or other streaming platforms. Knowledge of Terraform, CloudFormation, and DevOps for AWS. Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes. Experience in data pipeline development and tooling, such as Glue, Databricks, Synapse, or Dataproc. Experience with both relational and NoSQL databases, including PostgreSQL, DB2, and MongoDB. Excellent problem-solving, analytical, and critical thinking skills. Ability to manage multiple projects simultaneously while maintaining attention to detail. Communication skills: Ability to communicate with both technical and non-technical colleagues to derive technical requirements from business needs and problems. Preferred Skills and Experience Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics.
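The data-partitioning skill this role calls for, reduced to one function: deriving a Hive-style year/month/day key layout on S3 that Athena and Spark can prune. The bucket, table, and file names are made up for the sketch.

```python
from datetime import date

def partition_path(table: str, event_date: date, fmt: str = "parquet") -> str:
    """Hive-style partition layout commonly used on S3 data lakes.
    key=value path segments let query engines skip irrelevant files."""
    return (
        f"s3://data-lake/{table}/"                   # bucket name is illustrative
        f"year={event_date.year}/month={event_date.month:02d}/day={event_date.day:02d}/"
        f"part-0000.{fmt}"
    )

path = partition_path("orders", date(2024, 7, 5))
```

With this layout, a query filtered on `year = 2024 AND month = 7` only reads objects under that prefix, which is where most of the cost and latency savings of partitioning come from.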
Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 3 weeks ago
4.0 - 8.0 years
12 - 15 Lacs
Faridabad
Work from Office
We are seeking a detail-oriented QA Engineer with expertise in Dealer Portal applications. The ideal candidate will have strong experience in manual testing and, ideally, hands-on knowledge of automation using Playwright and Tricentis. You will be responsible for validating native functionality through frontend validation and backend validation, including API testing, checking database records using SQL queries and joins, and ensuring the accuracy of data flow across systems. Good experience with API testing through Postman is required. Roles and Responsibilities Key Responsibilities: Design, develop, and execute manual and automated test cases based on business and technical requirements. Validate native functionalities of dealer portal applications. Ensure data accuracy by cross-checking data against database records. Develop and execute test plans, test cases, and test scripts for different modules. Collaborate closely with developers, business analysts, and product owners to understand requirements and resolve defects. Perform regression, integration, end-to-end, and system testing to ensure software quality. Identify, document, and track defects using tools like JIRA and Xray. Participate in sprint ceremonies and contribute to continuous delivery pipelines. Proactively identify areas for quality improvement and work with the QA chapter to mitigate risks. Required Skills & Qualifications: 3-5 years of experience in Software Quality Assurance. Strong knowledge of QA methodologies, test planning (strong manual testing experience), and execution. Experience in testing APIs, integrations, and web-based applications. Proficiency in SQL (basic queries, joins, data validation). Good to have: hands-on experience with Java/TypeScript-based automation testing (Playwright), Postman, and JMeter. Familiarity with Agile development processes and CI/CD pipelines (e.g., Jenkins, Azure DevOps). Strong analytical and troubleshooting skills.
Experience with test management tools like JIRA, Xray. Excellent communication skills.
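The backend-validation task this posting describes — comparing an API response against database records using a SQL join — can be sketched as follows (sqlite3 stands in for the real database, and the dealer/order schema and API payload are invented for illustration):

```python
import sqlite3

# Hypothetical dealer-portal data: join dealers to orders, then compare
# the aggregate against what the API reported for the same dealer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dealers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders  (id INTEGER PRIMARY KEY, dealer_id INTEGER, amount REAL);
    INSERT INTO dealers VALUES (1, 'Acme Motors');
    INSERT INTO orders  VALUES (10, 1, 250.0), (11, 1, 125.5);
""")

# SQL join + aggregation: the "expected" values from the source of truth.
row = conn.execute("""
    SELECT d.name, COUNT(o.id), SUM(o.amount)
    FROM dealers d JOIN orders o ON o.dealer_id = d.id
    WHERE d.id = ?
    GROUP BY d.id
""", (1,)).fetchone()

# Stand-in for the JSON body an API client (e.g. Postman/requests) returned.
api_response = {"dealer": "Acme Motors", "order_count": 2, "total": 375.5}

# The actual validation: data must agree between the API and the database.
assert row[0] == api_response["dealer"]
assert row[1] == api_response["order_count"]
assert abs(row[2] - api_response["total"]) < 1e-9
print("API response matches database records")
```

In an automated suite the same checks would live inside Playwright or API test fixtures rather than a flat script.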
Posted 3 weeks ago
2.0 - 7.0 years
4 - 9 Lacs
Mumbai
Work from Office
We are seeking a skilled and experienced Software Engineer to join our dynamic development team. In this role, you will be responsible for developing, implementing, and maintaining software applications across the entire technology stack. This role requires the candidate to be a proficient Backend Engineer with intermediate working knowledge of Python, SQL, and APIs. Responsibilities: Design, develop, and maintain scalable backend systems and APIs. Optimize application performance and ensure system reliability. Collaborate with frontend engineers, product managers, and other stakeholders. Implement security best practices to protect data and infrastructure. Write clean, maintainable, and well-documented code. Monitor, debug, and troubleshoot backend (Python & SQL) issues. Requirements: 2+ years of prior experience in similar roles. Strong proficiency in backend programming languages (e.g., Python). Experience with RESTful APIs and proficiency in database management (SQL, NoSQL). Hands-on experience with cloud services (AWS) would be an added advantage. Basic familiarity with DevOps tools, CI/CD pipelines, and containerization (Docker, Kubernetes) is expected. Desirable: Working knowledge of Python, SQL, and AWS platform & services. Understanding of RESTful APIs. Familiarity with broker technology (e.g., RabbitMQ, ActiveMQ, SNS, SQS) is good to have. Experience working with sports data and a good understanding of cricket would be an added advantage. Availability for on-call support during critical events and seasons is expected.
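The Python/SQL backend work described above might look like the following minimal sketch: a data-access function sitting behind a REST endpoint, using parameterized queries (sqlite3 stands in for the production database, and the cricket schema, player, and endpoint path are invented for illustration):

```python
import json
import sqlite3

def get_batting_summary(conn, player_id):
    """Data-access layer for a hypothetical GET /players/{id}/batting endpoint.
    Parameterized SQL avoids injection; the dict maps directly to a JSON body."""
    row = conn.execute(
        "SELECT name, matches, runs FROM players WHERE id = ?",
        (player_id,),
    ).fetchone()
    if row is None:
        return {"error": "player not found"}, 404
    name, matches, runs = row
    body = {
        "name": name,
        "matches": matches,
        "runs": runs,
        "average": round(runs / matches, 2) if matches else None,
    }
    return body, 200

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT,"
    " matches INTEGER, runs INTEGER)"
)
conn.execute("INSERT INTO players VALUES (7, 'M.S. Dhoni', 350, 10773)")

body, status = get_batting_summary(conn, 7)
print(status, json.dumps(body))
```

A framework such as FastAPI or Flask would wrap a function like this in a route handler and serialize the dict to the HTTP response.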
Posted 3 weeks ago
5.0 - 8.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Total Yrs. of Experience*: 8. Relevant Yrs. of Experience*: 5+. Detailed JD (Roles and Responsibilities): Creating and modifying Terraform files to accommodate evolving business needs. Running the CI/CD checks (TeamCity/Jenkins) and analyzing them on failure. Identifying and rectifying data-flow issues within Snowflake. Addressing requests raised by the Engineering team and by Data Analysts consuming the data. Should have prior experience working on Terraform files: able to read and comprehend existing Terraform files, make changes to existing files based on requirements, and create new Terraform files when a new service is introduced. Qualifications: 5+ years of experience with Terraform. Mandatory skills*: 1. Terraform 2. DevOps 3. AWS, Kafka 4. Spark, SQL, DBT. Desired skills*: 1. Cloud infrastructure specialist 2. Expert in DevOps.
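Creating a new Terraform file for a newly introduced service, as described above, might look like this minimal sketch (the provider resource, names, and tags are all illustrative, not a real module from this team):

```hcl
variable "environment" {
  type    = string
  default = "dev"
}

# Hypothetical: an S3 landing bucket for a newly introduced ingestion service.
resource "aws_s3_bucket" "events_landing" {
  bucket = "example-events-landing-${var.environment}"
  tags = {
    team    = "data-platform"
    service = "events-ingest"
  }
}
```

A `terraform plan` would then surface the new resource for review in the TeamCity/Jenkins CI checks mentioned above before `terraform apply` is run.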
Posted 3 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
As a Full Stack developer, you will design and implement end-to-end solutions by developing and maintaining complex React/Redux front-ends, building robust ASP.NET Core Web API endpoints with secure JWT authentication, optimizing client-server data exchange, managing a MySQL database via stored procedures and Entity Framework Core, and collaborating on CI/CD pipelines in Azure DevOps or AWS. We're seeking an Adtech pro who thrives in a team environment, possesses exceptional communication and analytical skills, and can navigate the high-pressure demands of delivering results, taking ownership, and leveraging sales opportunities. Responsibilities: Develop/maintain React frontend with complex state management Build robust C# Web API endpoints Implement secure authentication flows Optimize API-client data exchange Maintain SQL databases via EF Core Collaborate on CI/CD pipelines Requirements 5+ years of experience with C#/.NET Core 3+ years of React/Redux proficiency (hooks, context API) Experience with REST API development (ASP.NET Core Web API) Proficient with SQL/Entity Framework Core Experience with styled-components/JSS JWT authentication implementation Complex form validation patterns Git flow/trunk-based development Azure DevOps/AWS experience Fluency in English – a must
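The posting's stack is C#/ASP.NET Core, but the JWT authentication flow it mentions is language-neutral; here is a minimal HS256 sign/verify sketch in Python using only the standard library (a real service would use a vetted library such as ASP.NET Core's JwtBearer middleware rather than hand-rolling this, and the secret below is illustrative only):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url without padding, as used in compact JWT serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: b64url(header).b64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.rsplit(".", 2)
    expected = hmac.new(secret, f"{header}.{body}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(_b64url(expected), sig)

secret = b"demo-secret"  # illustrative; real keys come from a secret store
token = sign_jwt({"sub": "user-42", "role": "editor"}, secret)
print(verify_jwt(token, secret))          # True
print(verify_jwt(token + "x", secret))    # False
```

A production flow would also set and check `exp`/`iss`/`aud` claims and reject unexpected `alg` values.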
Posted 3 weeks ago
1.0 - 4.0 years
6 - 10 Lacs
Gurugram
Work from Office
We are seeking a skilled and motivated .NET Core Developer to join our dynamic development team. The ideal candidate will have strong expertise in .NET Core, SQL Server 2022, Angular, React, and jQuery, and will play a key role in designing and developing robust, scalable enterprise-level applications. A critical part of this role includes working with business-critical modules, particularly within Project Lifecycle Management (PLM) systems. You will collaborate closely with cross-functional teams to deliver high-quality solutions aligned with business goals. Key Responsibilities: Design, develop, and maintain web-based applications using .NET Core, Angular, React, and jQuery. Write efficient SQL queries and manage data using SQL Server 2022. Collaborate with UI/UX designers, backend developers, and product managers to implement functional and intuitive solutions. Integrate and maintain features related to Project Lifecycle Management and other business modules. Ensure the performance, quality, and responsiveness of applications. Troubleshoot, debug, and upgrade existing software. Participate in code reviews and follow best practices for software development. Write and maintain clear technical documentation. Required Skills & Qualifications: Strong proficiency in .NET Core (C#) and SQL Server 2022. Experience with front-end technologies: Angular, React, and jQuery. Solid understanding of the software development life cycle (SDLC) and Agile methodologies. Knowledge of enterprise application architecture and design patterns. Familiarity with Project Lifecycle Management (PLM) systems is highly desirable. Excellent problem-solving and communication skills. Ability to work independently and within a team environment. Preferred Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Experience with cloud platforms (e.g., Azure, AWS) is a plus. Familiarity with DevOps practices and CI/CD pipelines.
Posted 3 weeks ago
4.0 - 7.0 years
10 - 15 Lacs
Pune
Hybrid
So, what’s the role all about? We are seeking a skilled Senior Data Engineer to join our Actimize Watch Data Analytics team. In this role, you will collaborate closely with the Data Science team, Business Analysts, and SMEs to monitor and optimize the performance of machine learning models. You will be responsible for running various analytics on data stored in S3, using advanced Python techniques, generating performance reports & visualization in Excel, and showcasing model performance & stability metrics through BI tools such as Power BI and QuickSight. How will you make an impact? Data Integration and Management: Design, develop, and maintain robust Python scripts to support analytics and machine learning model monitoring. Ensure data integrity and quality across various data sources, primarily focusing on data stored in AWS S3. Check the data integrity & correctness of various new customers getting onboarded to Actimize Watch. Analytics and Reporting: Work closely with Data Scientists, BAs & SMEs to understand model requirements and monitoring needs. Perform complex data analysis as well as visualization using Jupyter Notebooks, leveraging advanced Python libraries and techniques. Generate comprehensive model performance & stability reports, showcase them in BI tools. Standardize diverse analytics processes through automation and innovative approaches. Model Performance Monitoring: Implement monitoring solutions to track the performance and drift of machine learning models in production for various clients. Analyze model performance over time and identify potential issues or areas for improvement. Develop automated alerts and dashboards to provide real-time insights into model health. Business Intelligence and Visualization: Create and maintain dashboards in BI tools like Tableau, Power BI, and QuickSight to visualize model performance metrics. Collaborate with stakeholders to ensure the dashboards meet business needs and provide actionable insights.
Continuously improve visualization techniques to enhance the clarity and usability of the reports. Collaboration and Communication: Work closely with cross-functional teams, including Data Scientists, Product Managers, Business Analysts, and SMEs to understand requirements and deliver solutions. Communicate findings and insights effectively to both technical and non-technical stakeholders. Provide support and training to team members on data engineering and analytics best practices and tools. Have you got what it takes? 5 to 7 years of experience in data engineering, with a focus on analytics, data science, and machine learning model monitoring. Proficiency in Python and experience with Jupyter Notebooks for data analysis. Strong experience with AWS services, particularly S3 and related data processing tools. Expertise in Excel for reporting and data manipulation. Hands-on experience with BI tools such as Tableau, Power BI, and QuickSight. Solid understanding of machine learning concepts and model performance metrics. Strong Python & SQL skills for querying and manipulating large datasets. Excellent problem-solving and analytical skills. Ability to work in a fast-paced, collaborative environment. Strong communication skills with the ability to explain technical concepts to non-technical stakeholders. Preferred Qualifications: Experience with other AWS services like S3 and Glue, as well as BI tools like QuickSight and Power BI. Familiarity with CI/CD pipelines and automation tools. Knowledge of data governance and security best practices. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations.
If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7900 Reporting into: Tech Manager Role Type: Individual Contributor
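Model-drift monitoring of the kind this posting describes is often done with a metric such as the Population Stability Index (PSI); the sketch below is a minimal Python version (PSI is one common choice, not necessarily the metric Actimize Watch uses, and the bin edges and drift threshold are illustrative):

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between a baseline and a current sample.
    PSI = sum over bins of (a% - e%) * ln(a% / e%).
    Common rule of thumb (illustrative): PSI > 0.2 suggests drift."""
    def fractions(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            for i in range(len(edges) - 1):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(sum(counts), 1)
        # Floor at a tiny fraction so empty bins don't blow up the log.
        return [max(c / total, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

edges = [0.0, 0.25, 0.5, 0.75, 1.0001]  # model-score bins; illustrative
baseline = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
current = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]  # identical: no drift
print(psi(baseline, current, edges))  # 0.0
```

In the workflow described above, a value like this would be computed per model per client from scores in S3 and surfaced on a Power BI/QuickSight dashboard with automated alerts on threshold breaches.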
Posted 3 weeks ago
2.0 - 4.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Simreka (Devtaar GmbH) is looking for a DevOps Engineer to join our dynamic team and embark on a rewarding career journey. Collaborating with coworkers to conceptualize, develop, and release software. Conducting quality assurance to ensure that the software meets prescribed guidelines. Rolling out fixes and upgrades to software, as needed. Securing software to prevent security breaches and other vulnerabilities. Collecting and reviewing customers' feedback to enhance user experience. Suggesting alterations to workflow in order to improve efficiency and success. Pitching ideas for projects based on gaps in the market and technological advancements. Automate, secure, and scale our infrastructure with modern CI/CD, container orchestration, and observability tools.
Posted 3 weeks ago