Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
8 - 12 years
11 - 15 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design and develop our next generation of RESTful APIs and event-driven services in a distributed environment.
- Be hands-on in the design and development of robust solutions to hard problems, while considering scale, security, reliability, and cost.
- Support other product delivery partners in the successful build, test, and release of solutions.
- Work with distributed requirements and technical stakeholders to complete shared design and development.
- Support the full software lifecycle of design, development, testing, and support for technical delivery.
- Work with both onsite (Scrum Master, Product, QA, and Developers) and offshore QA team members to properly define testable scenarios based on requirements/acceptance criteria.
- Be part of a fast-moving team, working with the latest tools and open-source technologies.
- Work on a development team using agile methodologies.
- Understand the business and the application architecture end to end.
- Solve problems by crafting software solutions using maintainable and modular code.
- Participate in daily team standup meetings where you'll give and receive updates on the current backlog and challenges.
- Participate in code reviews.
- Ensure code quality and deliverables.
- Provide impact analysis for new requirements or changes.
- Responsible for low-level design with the team

Qualifications:

Required Skills:
- Technology stack: Java Spring Boot, GitHub, OpenShift, Kafka, MongoDB, AWS, Serverless, Lambda, OpenSearch
- Hands-on experience with Java 1.8 or higher, Spring Boot, OpenShift, Docker, Jenkins
- Solid understanding of OOP, design patterns, and data structures
- Experience in building REST APIs/microservices
- Strong front-end experience with React or Angular
- Strong understanding of parallel processing, concurrency, and asynchronous concepts
- Experience with databases such as MongoDB (NoSQL) and PostgreSQL
- Proficient in the SAM (Serverless Application Model) framework, with a strong command of Lambda functions using Java
- Proficient in internal integration within the AWS ecosystem using Lambda functions, leveraging services such as EventBridge, S3, SQS, SNS, and others
- Must have experience in Apache Spark
- Experienced in internal integration within AWS using DynamoDB with Lambda functions, demonstrating the ability to architect and implement robust serverless applications
- CI/CD experience; must have GitHub experience
- Recognized internally as the go-to person for the most complex software engineering assignments

Required Experience & Education:
- 11-13 years of experience
- Experience with vendor management in an onshore/offshore model
- Proven experience with architecture, design, and development of large-scale enterprise application solutions
- College degree (Bachelor's) in related technical/business areas or equivalent work experience
- Industry certifications such as PMP, Scrum Master, or Six Sigma Green Belt
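The posting above calls for Lambda functions consuming SQS events (in Java); as a minimal illustrative sketch of that integration pattern, here is the same idea in Python: a handler that processes an SQS batch and reports only the failed messages back, so the queue retries just those. The `process_order` helper and its `orderId` field are hypothetical, not from the posting.

```python
import json

def process_order(body):
    """Hypothetical business handler: require an 'orderId' field."""
    if "orderId" not in body:
        raise ValueError("missing orderId")

def handle_sqs_batch(event):
    """Process an SQS batch event and return the IDs of failed messages
    (the Lambda partial-batch-response pattern), so only those retry."""
    failures = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            process_order(body)
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Enabling this behavior also requires `ReportBatchItemFailures` on the event source mapping; without it, the return value is ignored.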
Posted 2 months ago
3 - 5 years
8 - 14 Lacs
Delhi NCR, Mumbai, Bengaluru
Hybrid
Responsibilities:
- Collaborate with stakeholders to understand business requirements and data needs, and translate them into scalable and efficient data engineering solutions using AWS Data Services.
- Design, develop, and maintain data pipelines using AWS serverless technologies such as Glue, S3, Lambda, DynamoDB, Athena, and Redshift.
- Implement data modeling techniques to optimize data storage and retrieval processes.
- Develop and deploy data processing and transformation frameworks to support both real-time and batch processing requirements.
- Ensure data pipelines are scalable, reliable, and performant at large data sizes.
- Implement data documentation and observability tools and practices to monitor pipelines.

Required Skills:
- Hands-on experience with Spark and Scala, and conversant with SQL (Scala + AWS is mandatory)
- Good knowledge of Hadoop (Oozie)
- Ability to reverse engineer SQL queries and Scala code to understand functionality
- Capable of identifying, analysing, and interpreting patterns and trends in complex data sets
- Strong experience on AWS (EMR, S3)
- Experience creating database designs, data models, and techniques for data mining

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
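The pipeline work described above always includes a transform step between extract and load; as a toy sketch (field names are illustrative, not from the posting), here is the kind of record-cleaning function such a batch stage would run:

```python
def clean_records(rows):
    """Toy transform step for a batch pipeline: drop rows missing an id,
    normalise country codes, and derive a full_name column."""
    out = []
    for row in rows:
        if not row.get("id"):
            continue  # a real pipeline would route these to a dead-letter store
        cleaned = dict(row)
        cleaned["country"] = (row.get("country") or "").strip().upper()
        first = row.get("first", "").strip()
        last = row.get("last", "").strip()
        cleaned["full_name"] = (first + " " + last).strip()
        out.append(cleaned)
    return out
```

In Glue or Spark the same logic would be expressed over a DataFrame, but the per-record rules are identical.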
Posted 2 months ago
7 - 12 years
15 - 30 Lacs
Gurgaon
Remote
- Minimum 6 years of hands-on experience deploying, enhancing, and troubleshooting foundational AWS services (EC2, S3, RDS, VPC, CloudTrail, CloudFront, Lambda, EKS, ECS, etc.)
- 3+ years of experience with serverless technologies, services, and container technologies (Docker, Kubernetes, etc.)
- Experience managing Kubernetes charts using Helm and managing production application deployments in Kubernetes clusters using kubectl
- Expertise in deploying distributed apps with containers (Docker) and orchestration (Kubernetes, EKS)
- Experience with infrastructure-as-code tools for provisioning and managing Kubernetes infrastructure
- (Preferred) Certification in container orchestration systems and/or Certified Kubernetes Administrator
- Experience with log management and analytics tools such as Splunk / ELK
- 3+ years of experience writing, debugging, and enhancing Terraform infrastructure-as-code scripts for EKS, EC2, S3, and other AWS services
- Expertise with key Terraform features: infrastructure as code, execution plans, resource graphs, and change automation
- Experience implementing cluster services with Kubernetes and Docker, including building self-hosted Kubernetes clusters using Terraform
- Experience provisioning AWS infrastructure using Terraform; develop and maintain infrastructure-as-code solutions
- Ability to write scripts in JavaScript, Bash, Python, TypeScript, or similar languages
- Able to work independently and in a team to architect and implement new solutions and technologies
- Very strong written and verbal communication skills: able to communicate verbally and in writing with all levels of employees and management, formally and informally, clearly and at the right level
- Ability to identify, evaluate, learn, and POC new technologies for implementation
- Experience designing and implementing highly resilient AWS solutions
- Experience with Windows Server, IIS, Docker/Kubernetes
- Strong understanding of systems, networks, and troubleshooting techniques
- Experience with automated build pipelines and continuous integration
- Source control, branching, and merging: git/svn/etc. (repository management)
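Day-to-day Kubernetes troubleshooting of the kind this role describes is often scripted; as a small sketch, here is a helper that parses `kubectl get pods`-style output and flags pods outside a healthy state (column layout assumed to match kubectl's default table):

```python
def unhealthy_pods(kubectl_output):
    """Parse `kubectl get pods` table output and return the names of pods
    not in a Running or Completed state, for alerting or triage scripts."""
    bad = []
    lines = kubectl_output.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        name, _ready, status = parts[0], parts[1], parts[2]
        if status not in ("Running", "Completed"):
            bad.append(name)
    return bad
```

In production one would query the Kubernetes API directly (e.g. via the official Python client) rather than scraping CLI output, but the triage logic is the same.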
Posted 2 months ago
8 - 12 years
20 - 30 Lacs
Hyderabad
Work from Office
- Design and develop cloud-hosted web applications for the insurance industry, from high-level architecture and network infrastructure to low-level creation of site layout, user experience, database schema, data structures, workflows, graphics, unit testing, and end-to-end integration testing.
- Working from static application mock-ups and wireframes, develop front-end user interfaces and page templates in HTML5, CSS, Sass, Less, TypeScript, Bootstrap, Angular, and third-party controls like Kendo UI/Infragistics.
- Proficiency in AWS services like Lambda, EC2, S3, and IAM for deploying and managing applications.
- Excellent programming skills in Python, with the ability to develop, maintain, and debug Python-based applications.
- Develop, maintain, and debug applications using .NET Core and C#.
- Stay up to date with the latest industry trends and technologies related to PostgreSQL, AWS, and Python.
- Design and implement risk management business functionality and in-database analytics.
- Identify complex data problems, review related information to develop and evaluate options, and design and implement solutions.
- Design and develop functional and responsive web applications by collaborating with other engineers in the Agile team.
- Develop REST APIs and understand WCF services.
- Prepare documentation and specifications.
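For the Python-plus-PostgreSQL side of the role above, a common building block is assembling parameterised queries so user input never lands in the SQL string. A minimal sketch (the `policies` table and its columns are invented for illustration, and `%s` placeholders follow psycopg-style drivers):

```python
def build_policy_query(filters):
    """Build a parameterised SQL statement from optional filters,
    keeping the values separate from the SQL text (injection-safe)."""
    clauses, params = [], []
    if filters.get("state"):
        clauses.append("state = %s")
        params.append(filters["state"])
    if filters.get("min_premium") is not None:
        clauses.append("premium >= %s")
        params.append(filters["min_premium"])
    sql = "SELECT policy_id, premium FROM policies"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params
```

The returned `(sql, params)` pair would be passed to `cursor.execute(sql, params)` so the driver handles quoting.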
Posted 2 months ago
2 - 3 years
6 - 10 Lacs
Bengaluru
Work from Office
We are looking for an enthusiastic RPA & Intelligent Automation professional to join the ranks of our newly founded CoE. Help us meet increasing demand from the business, support our rapidly growing portfolio of automation, and make an impact across every business area at Booking.com. We look at our team as a service provider for the entire company, operating with a large degree of autonomy and entrepreneurship.

B.Responsible
- Naturally oriented towards improving efficiencies.
- Seeking accountability from themselves and others.
- Compassionate collaborator with a deep sense of camaraderie.
- Willingness to be cross-functional, pick up new skills, and cover new ground with/for the team.
- Striving for continuous improvement and high quality in their work.
- Strong work ethic and high spirit.
- Keen to understand and solve real-world problems through technology.

B.Skilled
- 2-3 years of experience developing in Blue Prism
- CS, Engineering, or similar university background is a MUST HAVE
- Blue Prism certification is a MUST HAVE
- Knowledge of Blue Prism's architectural/infrastructure components
- Proficiency in core Python libraries like pandas, NumPy, etc.
- Exposure to AI/ML frameworks like TensorFlow, PyTorch, scikit-learn, etc.
- Understanding of NLP techniques like text summarization, sentiment analysis, and named-entity recognition is good to have
- In-depth understanding of AWS components (RDS, EC2, S3, IAM, CloudWatch, Lambda, SageMaker, VPC) is good to have
- Experience with VAULT, PASSPORT, GitLab for UAM / config management
- Exposure to Terraform code for deploying AWS services is good to have
- Professional experience with SQL, .NET, C#, HTTP APIs, and web services
- Experience designing, developing, deploying, and maintaining software
- Experience working in a scrum/agile environment
- Excellent communication skills in English
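The NLP techniques this listing mentions (text summarization in particular) are usually applied via libraries, but the core idea of frequency-based extractive summarization fits in a few lines. A naive sketch for illustration only, far simpler than production approaches:

```python
import re
from collections import Counter

def top_sentence(text):
    """Naive extractive summarisation: score each sentence by the corpus
    frequency of its words and return the highest-scoring sentence."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    return max(sentences, key=score)
```

Real systems would at least drop stopwords and normalise by sentence length; transformer-based abstractive models (the TensorFlow/PyTorch route) replace this scoring entirely.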
Posted 2 months ago
5 - 7 years
9 - 12 Lacs
Bengaluru
Work from Office
We are looking for an enthusiastic RPA & Intelligent Automation professional to join the ranks of our newly founded CoE. Help us meet increasing demand from the business, support our rapidly growing portfolio of automation, and make an impact across every business area at Booking.com. We look at our team as a service provider for the entire company, operating with a large degree of autonomy and entrepreneurship.

B.Responsible
- Naturally oriented towards improving efficiencies.
- Seeking accountability from themselves and others.
- Compassionate collaborator with a deep sense of camaraderie.
- Willingness to be cross-functional, pick up new skills, and cover new ground with/for the team.
- Striving for continuous improvement and high quality in their work.
- Strong work ethic and high spirit.
- Keen to understand and solve real-world problems through technology.

B.Skilled
- 5+ years of experience developing in Blue Prism
- CS, Engineering, or similar university background is a MUST HAVE
- Blue Prism certification is a MUST HAVE
- Knowledge of Blue Prism's architectural/infrastructure components
- Proficiency in core Python libraries like pandas, NumPy, etc.
- Exposure to AI/ML frameworks like TensorFlow, PyTorch, scikit-learn, etc.
- Understanding of NLP techniques like text summarization, sentiment analysis, and named-entity recognition is good to have
- In-depth understanding of AWS components (RDS, EC2, S3, IAM, CloudWatch, Lambda, SageMaker, VPC) is good to have
- Experience with VAULT, PASSPORT, GitLab for UAM / config management
- Exposure to Terraform code for deploying AWS services is good to have
- Professional experience with SQL, .NET, C#, HTTP APIs, and web services
- Experience designing, developing, deploying, and maintaining software
- Experience working in a scrum/agile environment
- Excellent communication skills in English

In return, we'll provide:
- A fast-paced environment and performance-driven culture
- Various opportunities to grow technically and personally via side projects, hackathons, conferences, and your involvement in the community
- Contributing to a high-scale, complex, world-renowned product and seeing the real-time impact of your work on millions of travelers worldwide
- Opportunity to utilize technical expertise, leadership capabilities, and entrepreneurial spirit
- Technical, behavioral, and interpersonal competence advancement via on-the-job opportunities and experimental projects
Posted 2 months ago
8 - 10 years
25 - 30 Lacs
Hyderabad
Work from Office
Position Summary: Data engineer on the Data Integration team.

Job Description & Responsibilities:
- Work with business and technical leadership to understand requirements.
- Design to the requirements and document the designs.
- Write product-grade, performant code for data extraction, transformation, and loading using Spark and PySpark.
- Do data modeling as needed for the requirements.
- Write performant queries using Teradata SQL, Hive SQL, and Spark SQL against Teradata and Hive.
- Implement DevOps pipelines to deploy code artifacts onto the designated platforms/servers, such as AWS.
- Troubleshoot issues, provide effective solutions, and monitor jobs in the production environment.
- Participate in sprint planning sessions, refinement/story-grooming sessions, daily scrums, demos, and retrospectives.

Experience Required: Overall 8-10 years of experience.

Experience Desired:
- Strong development experience in Spark, PySpark, shell scripting, and Teradata.
- Strong experience writing complex and effective SQL (Teradata SQL, Hive SQL, and Spark SQL) and stored procedures.
- Health care domain knowledge is a plus.

Primary Skills:
- Excellent work experience with Databricks for Data Lake implementations.
- Experience in Agile and working knowledge of DevOps tools (Git, Jenkins, Artifactory).
- AWS (S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch).
- Databricks (Delta Lake, notebooks, pipelines, cluster management, Azure/AWS integration).

Additional Skills:
- Experience with Jira and Confluence.
- Exercises considerable creativity, foresight, and judgment in conceiving, planning, and delivering initiatives.
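A recurring task in the Spark SQL work described above is deduplication: keeping only the newest row per business key, typically done with a `row_number()` window over a key partition ordered by a timestamp. The same logic in plain Python, as a sketch (`member_id` and `updated_at` are illustrative column names):

```python
def latest_per_key(rows):
    """Keep the newest row per business key: the plain-Python equivalent of
    a Spark row_number() window partitioned by key, ordered by timestamp."""
    best = {}
    for row in rows:
        key = row["member_id"]
        # ISO-8601 date strings compare correctly as plain strings
        if key not in best or row["updated_at"] > best[key]["updated_at"]:
            best[key] = row
    return sorted(best.values(), key=lambda r: r["member_id"])
```

In Spark SQL this would be `ROW_NUMBER() OVER (PARTITION BY member_id ORDER BY updated_at DESC)` filtered to rank 1.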
Posted 2 months ago
8 - 12 years
8 - 12 Lacs
Mumbai
Work from Office
About IndusInd: With over 2.5 crore customers, over 5,000 distribution points and nearly 2,000 branches, IndusInd Bank is a universal bank with a widespread banking footprint across the country. IndusInd offers a wide array of products and services for individuals and corporates, including microfinance, personal loans, personal and commercial vehicle loans, credit cards, and SME loans. Over the years, IndusInd has grown ceaselessly and dynamically, driven by zeal to offer our customers banking services at par with the highest quality standards in the industry. IndusInd is a pioneer in digital-first solutions that bring together the power of a next-gen digital product stack, customer excellence, and the trust of an established bank.

Skillset you should possess:
- Proven experience of 8+ years in engineering and software architecture design and development in the banking domain using the latest technology stack
- Strong understanding of Java Spring Boot, microservices, and OOP concepts
- Hands-on experience with various stacks, like AWS/Azure cloud components (such as API Gateway, Lambda, ECS, Fargate, ELK, and so on), middleware, and back-end stacks (such as Spring Boot 2.3.3)
- Hands-on experience with mobile hybrid technologies (such as Android Studio, Xcode, JavaScript), React Native, and Node.js
- Hands-on experience with the front-end layer (such as Angular 10, Cordova 10)
- Experience with event-driven applications using queues, service bus, and other related patterns
- Experience using monitoring tools to analyse and identify issues
- Sound knowledge of various operating systems and databases, SQL and NoSQL
- Full-stack developer for enterprise applications
- Experience designing platforms/applications for large enterprise-level scale and performance requirements
- Excellent communication skills and ability to collaborate in a multi-disciplinary team which includes Software Engineers and Software Development Managers

Responsibilities:
- Technical Lead responsible for the development and deployment of the mobile application platform
- Solve complex performance problems and architectural challenges
- Troubleshoot, test, and maintain the core product software and databases to ensure strong optimization and functionality
- Help the team with technical challenges, code review, and driving automated deployment
- Technical documentation
- Understand the requirements and develop mobile/web applications
- Work on multiple requirements at the same time and complete tasks in a timely manner
- Work closely with the product manager to release features and additions

Selection Process: Interested candidates are mandatorily required to apply through this listing on Jigya. Only applications received through this posting will be evaluated further. Shortlisted candidates will appear in an online assessment administered by Jigya on behalf of IndusInd Bank. Candidates selected after the screening test will be interviewed by IndusInd Bank.
Posted 2 months ago
13 - 16 years
15 - 18 Lacs
Hyderabad
Work from Office
Responsibilities: This role provides counsel and advice to management on significant infrastructure matters, often requiring coordination between organizations. Serves as the Sr. Advisor responsible for managing database structure for Big Data information technology solutions. Leads the analysis and implementation of engineering infrastructure solutions for projects and/or work requests for complex business solutions. Other key responsibilities include:
- Provide support for Big Data services (Databricks, Snowflake, etc.).
- Implement and maintain the Databricks platform on AWS & Azure.
- Manage Databricks account administration, workspace administration, cluster policies, and Unity Catalog.
- Implement and maintain the Snowflake platform on AWS & Azure.
- Monitoring, performance tuning, and cost optimization.
- Strong experience in AWS technologies: S3, EC2, EKS, ECS, Lambda, Route 53, EMR, CloudWatch, CloudTrail, and KMS.
- Implement, maintain, and optimize CI/CD and Infrastructure-as-Code pipelines, with experience in Jenkins, GitHub, and Terraform.
- Proficient with languages like SQL, Python, PySpark, and Go.
- Experience in IT service management: incident management, problem management, change management, ITIL.
- Experience working with vendor support to resolve technical issues.

Qualifications

Required Skills:
- Proven experience with the Databricks and Snowflake platforms on AWS & Azure
- Solid grasp of S3, EC2, EKS, ECS, Lambda, Route 53, EMR, CloudWatch, CloudTrail, and KMS
- Strong troubleshooting skills to identify and resolve issues efficiently
- Excellent teamwork and communication skills, enabling effective collaboration with cross-functional teams
- Prior experience required in IT service management: incident management, problem management, change management, ITIL
- Strong vendor management and performance tuning skills required

Required Experience & Education:
- 13 to 16 years of experience
- Bachelor's degree or better preferred
Posted 2 months ago
8 - 12 years
27 - 32 Lacs
Hyderabad
Work from Office
We are looking for exceptional software engineers / developers in our PBM Plus Technology organization. This role requires a Java Developer who has experience developing RESTful APIs and microservices and deploying them on-prem and/or on AWS infrastructure using the technologies listed below. They are expected to work closely with Subject Matter Experts, developers, and business stakeholders to ensure that application solutions meet business/customer requirements.

Responsibilities:
- Design and develop our next generation of RESTful APIs and event-driven services in a distributed environment.
- Be hands-on in the design and development of robust solutions to hard problems, while considering scale, security, reliability, and cost.
- Support other product delivery partners in the successful build, test, and release of solutions.
- Work with distributed requirements and technical stakeholders to complete shared design and development.
- Support the full software lifecycle of design, development, testing, and support for technical delivery.
- Work with both onsite (Scrum Master, Product, QA, and Developers) and offshore QA team members to properly define testable scenarios based on requirements/acceptance criteria.
- Be part of a fast-moving team, working with the latest tools and open-source technologies.
- Work on a development team using agile methodologies.
- Understand the business and the application architecture end to end.
- Solve problems by crafting software solutions using maintainable and modular code.
- Participate in daily team standup meetings where you'll give and receive updates on the current backlog and challenges.
- Participate in code reviews.
- Ensure code quality and deliverables.
- Provide impact analysis for new requirements or changes.
- Responsible for low-level design with the team

Qualifications

Required Skills:
- Technology stack: Java Spring Boot, GitHub, OpenShift, Kafka, MongoDB, AWS, Serverless, Lambda, OpenSearch
- Hands-on experience with Java 1.8 or higher, Spring Boot, OpenShift, Docker, Jenkins
- Solid understanding of OOP, design patterns, and data structures
- Experience in building REST APIs / microservices
- Experience in cloud technology like AWS
- Strong understanding of parallel processing, concurrency, and asynchronous concepts
- Experience with databases such as MongoDB (NoSQL) and PostgreSQL
- Proficient in the SAM (Serverless Application Model) framework, with a strong command of Lambda functions using Java
- Proficient in internal integration within the AWS ecosystem using Lambda functions, leveraging services such as EventBridge, S3, SQS, SNS, and others
- Experienced in internal integration within AWS using DynamoDB with Lambda functions, demonstrating the ability to architect and implement robust serverless applications
- CI/CD experience; must have GitHub experience
- Recognized internally as the go-to person for the most complex software engineering assignments

Required Experience & Education:
- 8+ years of experience
- Experience with vendor management in an onshore/offshore model
- Proven experience with architecture, design, and development of large-scale enterprise application solutions
- College degree (Bachelor's) in related technical/business areas or equivalent work experience
- Industry certifications such as PMP, Scrum Master, or Six Sigma Green Belt
Posted 2 months ago
5 - 10 years
50 - 55 Lacs
Bengaluru
Hybrid
WHAT YOU'LL DO
As a member of the team, you will be responsible for developing, testing, and deploying data-driven software products in AWS Cloud. You will work with a team consisting of a data scientist, an enterprise architect, data engineers, and business users to enhance products' feature sets.

Qualification & Skills:

Mandatory:
- Knowledge and experience in audit, expense, and firm-level data elements
- Knowledge and experience in audit and compliance products/processes
- End-to-end product development lifecycle knowledge/exposure
- Strong in AWS Cloud and associated services like Elastic Beanstalk, SageMaker, EFS, S3, IAM, Glue, Lambda, SQS, SNS, KMS, encryption, Secrets Manager
- Strong experience in Snowflake database operations
- Strong in SQL and the Python programming language
- Strong experience with a web development framework (Django)
- Strong experience in React and associated frameworks (Next.js, Tailwind, etc.)
- Experience in CI/CD pipelines and DevOps methodology
- Experience in SonarQube integration and best practices
- Implementation of security best practices for web applications and cloud infrastructure
- Knowledge of Wiz.io for security protocols related to the AWS Cloud Platform

Nice to Have:
- Knowledge of data architecture, data modeling, best practices, and security policies in the data management space
- Basic data science knowledge preferred
- Experience in KNIME/Tableau/Power BI

Experience & Education:
- Between 5 and 15 years of IT experience
- Bachelor's/Master's degree from an accredited college/university in a business-related or technology-related field
Posted 2 months ago
3 - 6 years
5 - 8 Lacs
Bengaluru
Work from Office
Your Job: The Enterprise Finance Applications Team at Koch Global Services (KGS) is seeking a Product Analyst to join our product team to support leveraged, enterprise Order to Cash (OTC) Electronic Invoicing (e-Invoicing) capabilities and projects. This role will focus on both product and project work to build and maintain the e-Invoicing product. As a Product Analyst, you will be responsible for supporting the e-Invoicing application and related technologies. You will also provide support for new application implementations/projects, identify consumer experience process enhancements, and identify ways to improve and automate processes.

What You Will Do:
- Develop detailed knowledge and understanding of the e-Invoicing business process.
- Support the e-Invoicing global user base. Expectations include owning the life cycle of submitted tickets; troubleshooting and resolving issues using technical/critical thinking skills; monitoring and maintaining interconnected systems; and identifying root causes and support trends to reduce/improve/prevent future user disruption.
- Be responsible for administrative needs of the overall system, including security, users, configuration, integrations, and master reference data.
- Be responsible for understanding the impact of releases, patches, integrations, and system updates; test and validate new system releases and functionality, communicate effectively with customers on changes, and create training materials and documentation when applicable.
- Actively support business partners throughout the project lifecycle to onboard e-Invoicing in new countries. This includes eliciting and validating business requirements, supporting development efforts, and testing new integrations.
- Connect with the OTC team and stakeholders to develop and maintain strong working relationships, optimize value, and internalize the business. Actively listen to and anticipate customers' future needs and incorporate them into team strategies and priorities.
- Use analytical experience, specifically a curiosity mindset and asking good questions, to dig deep into business processes and define solutions.
- Have the courage to challenge and escalate; embrace teamwork; work independently; be inclusive of differing cultures/opinions; be reliable and trustworthy.

Who You Are (Basic Qualifications):
- 3+ years of knowledge of AWS (S3, Cognito, Lambda, DynamoDB), API integrations, and data lakes/data products.
- Ability to work with global end users and stakeholders by answering questions, supporting production users, and resolving production issues.
- Experience engaging with customers through soliciting, capturing, and critically thinking through requirements.
- Experience using data to understand business needs, efficiently convert requests into actionable assignments, and present solutions to stakeholders.
- Experience managing, prioritizing, and working simultaneously across multiple issues and concurrent projects.
- Experience providing reasonable customer communications about delivery dates and updates, and managing expectations.

What Will Put You Ahead:
- Basic experience with financial or accounting applications.
- Experience implementing and/or supporting enterprise ERP or other large-scale projects, specifically in the accounting/finance/accounts receivable area.
- Experience partnering with global end users.
- Experience developing training programs and materials for users of enhanced systems and products.
- Experience influencing and challenging peers and stakeholders to drive to common goals and solutions.
- Experience working in Agile-based methodology (Kanban, SCRUM, ServiceNow).
Posted 2 months ago
6 - 10 years
8 - 12 Lacs
Karnataka
Work from Office
Description:

Experience: 6+ years
Primary skills: Java, Spring, AWS, PL/SQL (hands-on experience required)
Secondary skills: Angular, Node.js, AWS CDK, Maven, CI/CD, GitLab

Detailed JD: Looking for a full-stack developer with T-shaped skills and deep expertise in the backend.
- Java: Should be able to develop and document secured APIs. JDK 8.0 or greater, Spring Boot framework, JPA or a similar ORM framework like Hibernate.
- AWS: Experience developing cloud-native applications using Lambda, S3, API Gateway, NoSQL (DynamoDB), load balancers, SNS, SQS, ECS.
- Angular: Experience in Angular, Node.js, and TypeScript.
- Experience with Infrastructure as Code (CDK preferred).
- Expertise in securing APIs with OAuth 2.0.
- Expertise with observability frameworks such as Dynatrace, Splunk, etc.
- Expertise in DevOps capabilities such as GitLab and Jenkins; ability to write CI/CD pipelines is preferred. Familiarity with build tools such as Maven and Webpack.
- Experience in MPA frameworks like Struts or Spring MVC is a plus.
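Securing APIs with OAuth 2.0, as this listing requires, usually means handling JWT access tokens. One small, often-needed piece is reading a token's claims: the payload is just base64url-encoded JSON. A stdlib-only sketch (decoding only — it performs no signature verification, so it must never be used for authorisation decisions):

```python
import base64
import json
import time

def jwt_claims(token):
    """Decode a JWT payload WITHOUT verifying its signature.
    Useful for inspecting claims in logs or tests, never for auth."""
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore b64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def is_expired(claims, now=None):
    """True if the token's exp claim (epoch seconds) is in the past."""
    now = time.time() if now is None else now
    return claims.get("exp", 0) <= now
```

In production, a library such as a JOSE/JWT implementation validates the signature against the issuer's keys before any claim is trusted.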
Posted 2 months ago
7 - 12 years
22 - 27 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize robust data pipelines to efficiently collect, process, and store large-scale datasets for AI/ML applications.
- ETL Processes: Develop and maintain Extract, Transform, and Load (ETL) processes to ensure accurate and timely data delivery for machine learning models.
- Data Integration: Integrate diverse data sources (structured, unstructured, and semi-structured data) into a unified and scalable data architecture.
- Data Warehousing & Management: Design and manage data warehouses to store processed and raw data in a highly structured, accessible format for analytics and AI/ML models.
- AI/ML Model Development: Collaborate with Data Scientists to build, fine-tune, and deploy machine learning models into production environments, focusing on model optimization, scalability, and operationalization.
- Automation: Implement automation techniques to support model retraining, monitoring, and reporting.
- Cloud & Distributed Systems: Work with cloud platforms (AWS, Azure, GCP) and distributed systems to store and process data efficiently, ensuring that AI/ML models are scalable and maintainable in the cloud environment.
- Data Quality & Governance: Implement data quality checks, monitoring, and governance frameworks to ensure the integrity and security of the data being used for AI/ML models.
- Collaboration: Work cross-functionally with Data Science, Business Intelligence, and other engineering teams to meet organizational data needs and ensure seamless integration with analytics platforms.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- Strong proficiency in Python for AI/ML and data engineering tasks.
- Experience with AI/ML frameworks such as TensorFlow, PyTorch, scikit-learn, and Keras.
- Proficient in SQL and working with relational databases (e.g., MySQL, PostgreSQL, SQL Server).
- Strong experience with ETL pipelines and data wrangling on large datasets.
- Familiarity with cloud-based data engineering tools and services (e.g., AWS (S3, Lambda, Redshift), Azure, GCP).
- Solid understanding of big data technologies like Hadoop, Spark, and Kafka for data processing at scale.
- Experience in managing and processing both structured and unstructured data.
- Knowledge of version control systems (e.g., Git) and agile development methodologies.
- Experience with containers and orchestration tools such as Docker and Kubernetes.
- Strong communication skills to collaborate effectively with cross-functional teams.

Preferred Skills:
- Experience with data warehouses (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Familiarity with CI/CD pipelines for ML model deployment and automation.
- Familiarity with machine learning model monitoring and performance optimization.
- Experience with data visualization tools like Tableau, Power BI, or Plotly.
- Knowledge of deep learning models and frameworks.
- DevOps or MLOps experience for automating deployment of models.
- Advanced statistics or math background for improving model performance and accuracy.
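Preparing features for the model training this role supports typically includes scaling numeric columns. As a minimal sketch of one standard technique (min-max scaling to [0, 1], the same transform scikit-learn's `MinMaxScaler` applies per column):

```python
def min_max_scale(values):
    """Scale a numeric feature column to [0, 1]; constant columns map to 0.0
    to avoid division by zero."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

In practice the scaler's `lo`/`hi` must be fitted on training data only and reused at inference time, or the model sees a different input distribution.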
Posted 2 months ago
5 - 10 years
6 - 10 Lacs
Kolkata
Work from Office
Seeking an AWS-certified professional with expertise in cloud platforms, serverless architecture, monitoring, and highly available systems to manage, optimize, and secure AWS infrastructure while leading and mentoring teams. Key Skills:
- AWS Services: IAM, EC2, VPC, ELB/ALB, Auto Scaling, Lambda
- AWS Managed Products: EKS, ECS, ECR, Route 53, SES, ElastiCache, RDS, Redshift
- Cloud Platforms: Expertise in AWS infrastructure and services
- Serverless Development Architecture
- Operating Systems: Linux
- Monitoring and Alerting: Implementing and improving monitoring stacks
- Security: SSH, cloud connectivity, and security protocols
- System Reliability: High availability, production systems, and configuration management
- Automation and Scripting: Installing and enhancing scripts
- Team Leadership: Mentoring and guiding teams on new technologies
- Certifications: AWS Certified Solutions Architect, Developer, DevOps Engineer, SysOps Administrator
Posted 2 months ago
6 - 7 years
18 - 25 Lacs
Bengaluru
Work from Office
The ideal candidate for this position should have 5+ years of experience in development with Node.js and TypeScript, plus AWS (Lambda, DynamoDB, EventBridge, S3), Serverless, Terraform, and GraphQL. Responsibilities:
- Design, develop, and maintain server-side applications using TypeScript and Node.js.
- Write clean, scalable, and well-documented code.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Implement and maintain APIs for integration with front-end applications and external services.
- Optimize application performance, reliability, and scalability.
- Conduct code reviews and provide constructive feedback to team members.
- Troubleshoot and debug issues, and provide timely resolutions.
- Stay updated with the latest technologies and best practices in software development.
Posted 2 months ago
5 - 8 years
3 - 7 Lacs
Bengaluru, Hyderabad
Work from Office
Key Responsibilities:
- Design, implement, and maintain cloud-based infrastructure on AWS.
- Manage and monitor AWS services, including EC2, S3, Lambda, RDS, CloudFormation, VPC, etc.
- Develop automation scripts for deployment, monitoring, and scaling using AWS services.
- Collaborate with DevOps teams to automate build, test, and deployment pipelines.
- Ensure the security and compliance of cloud environments using AWS security best practices.
- Optimize cloud resource usage to reduce costs while maintaining high performance.
- Troubleshoot issues related to cloud infrastructure and services.
- Participate in capacity planning and disaster recovery strategies.
- Monitor application performance and make necessary adjustments to ensure optimal performance.
- Stay current with new AWS features and tools and evaluate their applicability for the organization.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as an AWS Engineer or in a similar cloud infrastructure role.
- In-depth knowledge of AWS services, including EC2, S3, RDS, Lambda, VPC, CloudWatch, etc.
- Proficiency in scripting languages such as Python, Shell, or Bash.
- Experience with infrastructure-as-code tools like Terraform or AWS CloudFormation.
- Strong understanding of networking concepts, cloud security, and best practices.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) is a plus.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication skills, both written and verbal.
- AWS certifications (AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are preferred.
Preferred Skills:
- Experience with serverless architectures and services.
- Knowledge of CI/CD pipelines and DevOps methodologies.
- Experience with monitoring and logging tools like CloudWatch, Datadog, or Prometheus.
- Knowledge of AWS FinOps.
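The infrastructure-as-code requirement above can be illustrated with a small sketch: generating a CloudFormation template body as plain JSON using only the standard library. The resource name, instance type, and AMI ID here are hypothetical placeholders, and the template is only printed, not deployed.

```python
import json

def ec2_template(instance_type, ami_id):
    """Build a minimal CloudFormation template for a single EC2 instance."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppServer": {  # hypothetical logical resource name
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "ImageId": ami_id,
                    "Tags": [{"Key": "ManagedBy", "Value": "automation"}],
                },
            }
        },
    }

template = ec2_template("t3.micro", "ami-0123456789abcdef0")
print(json.dumps(template, indent=2))
```

In practice a template body like this would be handed to the CloudFormation service (e.g., via the AWS CLI or an SDK) rather than assembled ad hoc, but the generated-JSON shape is the same.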
Posted 2 months ago
5 - 7 years
3 - 7 Lacs
Bengaluru, Hyderabad
Work from Office
Key Responsibilities:
- Design, implement, and maintain cloud-based infrastructure on AWS.
- Manage and monitor AWS services, including EC2, S3, Lambda, RDS, CloudFormation, VPC, etc.
- Develop automation scripts for deployment, monitoring, and scaling using AWS services.
- Collaborate with DevOps teams to automate build, test, and deployment pipelines.
- Ensure the security and compliance of cloud environments using AWS security best practices.
- Optimize cloud resource usage to reduce costs while maintaining high performance.
- Troubleshoot issues related to cloud infrastructure and services.
- Participate in capacity planning and disaster recovery strategies.
- Monitor application performance and make necessary adjustments to ensure optimal performance.
- Stay current with new AWS features and tools and evaluate their applicability for the organization.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as an AWS Engineer or in a similar cloud infrastructure role.
- In-depth knowledge of AWS services, including EC2, S3, RDS, Lambda, VPC, CloudWatch, etc.
- Proficiency in scripting languages such as Python, Shell, or Bash.
- Experience with infrastructure-as-code tools like Terraform or AWS CloudFormation.
- Strong understanding of networking concepts, cloud security, and best practices.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) is a plus.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication skills, both written and verbal.
- AWS certifications (AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are preferred.
Preferred Skills:
- Experience with serverless architectures and services.
- Knowledge of CI/CD pipelines and DevOps methodologies.
- Experience with monitoring and logging tools like CloudWatch, Datadog, or Prometheus.
- Knowledge of AWS FinOps.
Posted 2 months ago
7 - 10 years
15 - 20 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
Role & responsibilities:
- IVR Knowledge: Experience with interactive voice response systems.
- Amazon Connect, Lambda, Lex, and Polly: Proficiency in these tools is essential.
- Programming and Telephony Knowledge: A solid foundation in programming, along with telephony systems expertise. Experience with Node.js is preferred.
- Conversational AI: Skills in natural language processing and speech analytics are highly valuable.
- .NET (Preferred, Not Required): While .NET experience is preferred, it is not mandatory.
Posted 2 months ago
11 - 14 years
37 - 40 Lacs
Hyderabad
Work from Office
We are looking for exceptional software engineers/developers in our PBM Plus Technology organization. This role requires a Java Developer who has experience developing RESTful APIs and microservices and deploying them on-prem and/or on AWS infrastructure using the technologies listed below. They are expected to work closely with Subject Matter Experts, developers, and business stakeholders to ensure that application solutions meet business/customer requirements. Responsibilities:
- Design and develop our next generation of RESTful APIs and event-driven services in a distributed environment.
- Be hands-on in the design and development of robust solutions to hard problems, while considering scale, security, reliability, and cost.
- Support other product delivery partners in the successful build, test, and release of solutions.
- Work with distributed requirements and technical stakeholders to complete shared design and development.
- Support the full software lifecycle of design, development, testing, and support for technical delivery.
- Work with both onsite (Scrum Master, Product, QA, and Developers) and offshore QA team members to properly define testable scenarios based on requirements/acceptance criteria.
- Be part of a fast-moving team, working with the latest tools and open-source technologies.
- Work on a development team using agile methodologies.
- Understand the business and the application architecture end to end.
- Solve problems by crafting software solutions using maintainable and modular code.
- Participate in daily team standup meetings where you'll give and receive updates on the current backlog and challenges.
- Participate in code reviews.
- Ensure code quality and deliverables.
- Provide impact analysis for new requirements or changes.
- Responsible for low-level design with the team.
Qualifications: Required Skills:
- Technology Stack: Java, Spring Boot, GitHub, OpenShift, Kafka, MongoDB; AWS, Serverless, and Lambda good to have
- Hands-on experience with Java 1.8 or higher, Spring Boot, OpenShift, Docker, Jenkins
- Solid understanding of OOP, Design Patterns, and Data Structures
- Experience in building REST APIs/Microservices
- Strong understanding of parallel processing, concurrency, and asynchronous concepts
- Experience with databases like MongoDB and PostgreSQL (2-3 years; relational and document DBs)
- Proficient in working with the SAM (Serverless Application Model) framework, with a strong command of Lambda functions using Java
- Proficient in internal integration within the AWS ecosystem using Lambda functions, leveraging services such as EventBridge, S3, SQS, SNS, and others
- Experienced in internal integration within AWS using DynamoDB with Lambda functions, demonstrating the ability to architect and implement robust serverless applications
- CI/CD experience: must have GitHub experience
- Recognized internally as the go-to person for the most complex software engineering assignments
Required Experience & Education:
- 11+ years of experience
- Experience with vendor management in an onshore/offshore model
- Proven experience with architecture, design, and development of large-scale enterprise application solutions
- College degree (Bachelor) in related technical/business areas or equivalent work experience
- Industry certifications such as PMP, Scrum Master, or Six Sigma Green Belt
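The Lambda-with-SQS integration named above can be sketched as a handler function. The posting targets Java; to keep one language across the examples on this page, this hypothetical sketch uses Python. The event shape follows the standard SQS-to-Lambda record format, but the message fields (`order_id`) are invented.

```python
import json

def handler(event, context=None):
    """Process SQS records delivered to a Lambda function.

    Each record's body is expected to be a JSON order message;
    the field names here are hypothetical.
    """
    processed = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        processed.append({"order_id": body["order_id"], "status": "PROCESSED"})
    # An empty batchItemFailures list tells SQS every record succeeded
    return {"batchItemFailures": [], "processed": processed}

event = {"Records": [{"messageId": "1", "body": json.dumps({"order_id": "A-42"})}]}
result = handler(event)
print(result["processed"])  # [{'order_id': 'A-42', 'status': 'PROCESSED'}]
```

The equivalent Java handler would implement `RequestHandler<SQSEvent, ...>` and be declared as a function resource in a SAM template.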
Posted 2 months ago
3 - 5 years
5 - 8 Lacs
Hyderabad
Work from Office
Position Summary: The Data Engineering Senior Analyst demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and product mindset. This individual will act as a key contributor on the team to design, build, test, and deliver large-scale software applications, systems, platforms, services, or technologies in the data engineering space, and will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery. The candidate will play a key role in automating processes on Databricks and AWS, collaborating with business and technology partners to gather requirements, develop, and implement. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The applicant will be working in a team that demands an innovation-driven, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence, and will work with internal and external stakeholders and customers to build solutions as part of Enterprise Data Engineering, which requires very strong technical and communication skills.
Delivery: Intermediate delivery skills, including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small batch releases, and contribute to tradeoff and negotiation discussions.
Domain Expertise: Demonstrated track record of domain expertise, including the ability to understand the technical concepts necessary to do the job effectively, demonstrate willingness, cooperation, and concern for business issues, and possess in-depth knowledge of the immediate systems worked on.
Problem Solving: Proven problem-solving skills, including debugging skills that let you determine the source of issues in unfamiliar code or systems, the ability to recognize and solve repetitive problems rather than working around them, to recognize mistakes and use them as learning opportunities, and to break down large problems into smaller, more manageable ones.
Job Description & Responsibilities: The candidate will be responsible for delivering business needs end to end, from requirements through development into production. Through a hands-on engineering approach in the Databricks environment, this individual will deliver data engineering toolchains, platform capabilities, and reusable patterns. The applicant will be responsible for following software engineering best practices with an automation-first approach and a continuous learning and improvement mindset, will ensure adherence to enterprise architecture direction and architectural standards, and should be able to collaborate in a high-performing team environment, with an ability to influence and be influenced by others.
Experience Required:
- 3 to 5 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation
- More than 3 years of experience in Databricks within an AWS environment
- Data engineering experience
Experience Desired:
- Expertise in Agile software development principles and patterns
- Expertise in building streaming, batch, and event-driven architectures and data pipelines
Primary Skills:
- Cloud-based security principles and protocols like OAuth2, JWT, data encryption, data hashing, secret management, etc.
- Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue
- Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation
- Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry
- Experience in multi-cloud software-as-a-service products such as Databricks and Snowflake
- Experience in Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation
- Experience in messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS
- Experience in API and microservices stacks such as Spring Boot and Quarkus
- Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront
- Experience with one or more of the following programming and scripting languages: Python, Scala, JVM-based languages, or JavaScript, and the ability to pick up new languages
- Experience in building CI/CD pipelines using Jenkins and GitHub Actions
- Strong expertise with source code management and its best practices
- Proficient in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD)
- Knowledge of the Behavior-Driven Development (BDD) approach
Additional Skills:
- Ability to perform detailed analysis of business problems and technical environments
- Strong oral and written communication skills
- Ability to think strategically, implement iteratively, and estimate the financial impact of design/architecture alternatives
- Continuous focus on ongoing learning and development
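The "configuration-driven data transformation and curation" named above can be sketched without Spark: a list of declarative transform specs applied to a record. The spec format, operation names, and field names are all hypothetical; in Databricks the same idea would drive generated DataFrame operations.

```python
def apply_transforms(record, specs):
    """Apply a sequence of declarative transforms to one record.

    Supported ops (hypothetical): rename, uppercase, drop.
    """
    out = dict(record)  # work on a copy; leave the input untouched
    for spec in specs:
        op, field = spec["op"], spec["field"]
        if op == "rename":
            out[spec["to"]] = out.pop(field)
        elif op == "uppercase":
            out[field] = out[field].upper()
        elif op == "drop":
            out.pop(field, None)
    return out

# The pipeline's behavior lives in data, not code: edit the config,
# not the transform engine.
config = [
    {"op": "rename", "field": "fname", "to": "first_name"},
    {"op": "uppercase", "field": "country"},
    {"op": "drop", "field": "ssn"},
]
row = {"fname": "ada", "country": "uk", "ssn": "000"}
print(apply_transforms(row, config))  # {'country': 'UK', 'first_name': 'ada'}
```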
Posted 2 months ago
3 - 5 years
6 - 8 Lacs
Hyderabad
Work from Office
Position Summary: Cigna, a leading Health Services company, is looking for an exceptional engineer in our Data & Analytics Engineering organization. The Full Stack Engineer is responsible for the delivery of a business need, from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, proficient in others, and to have a hunger to learn on the job and add value to the business. Critical attributes of a Full Stack Engineer, among others, are ownership, eagerness to learn, and an open mindset. In addition to delivery, the Full Stack Engineer should have an automation-first and continuous improvement mindset, drive the adoption of CI/CD tools, and support the improvement of the tool sets/processes. Job Description & Responsibilities:
- Design and architect the solution independently
- Take ownership and accountability
- Write referenceable and modular code
- Be fluent in particular areas and have proficiency in many areas
- Have a passion to learn
- Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have business impact
- Take risks and champion new ideas
Experience Required:
- 3+ years of experience in the listed skills in a Data Engineering role
- 3+ years of Python scripting experience
- 3+ years of Data Management and SQL expertise; Teradata and Snowflake experience strongly preferred
- 3+ years being part of Agile (Scrum) teams
Experience Desired:
- Experience with version management tools; Git preferred
- Experience with BDD and TDD development methodologies
- Experience working in agile CI/CD environments; Jenkins experience preferred
- Knowledge of and/or experience with health care information domains preferred
Education and Training Required: Bachelor's degree (or equivalent) required
Primary Skills:
- Expertise with big data technologies: Hadoop, HiveQL, Spark (Scala/Python)
- Expertise with cloud technologies: AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR)
Additional Skills: Experience working on analytical models and their deployment/production enablement via data & analytics pipelines
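The "monitoring data to identify problems before they have business impact" attribute above can be sketched as a simple rule-based quality check. The column names, null-rate threshold, and sample rows are hypothetical.

```python
def quality_report(rows, required=("member_id", "claim_amount"), null_rate_max=0.1):
    """Return per-column null rates and flag columns breaching the threshold."""
    total = len(rows)
    report = {}
    for col in required:
        nulls = sum(1 for r in rows if r.get(col) in (None, ""))
        rate = nulls / total if total else 0.0
        report[col] = {"null_rate": rate, "breach": rate > null_rate_max}
    return report

rows = [
    {"member_id": "M1", "claim_amount": 120.0},
    {"member_id": "M2", "claim_amount": None},
    {"member_id": "", "claim_amount": 80.0},
    {"member_id": "M4", "claim_amount": 55.5},
]
# Both columns have 1 null out of 4 rows (25%), above the 10% threshold
print(quality_report(rows))
```

A check like this would typically run as a scheduled pipeline step, with breaches raising alerts rather than being printed.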
Posted 2 months ago
8 - 10 years
25 - 30 Lacs
Hyderabad
Work from Office
The primary responsibility of an Application Development Advisor in Cigna Specialty Technology (comprising Dental, Vision, Supplemental Health, and Stop Loss) is to deliver and support working software, where our team leverages technologies including JavaScript, Angular, HTML, Python, Java, .NET C#, Oracle PL/SQL, Oracle Apex, and other technologies. Software includes smaller-scale business tools, medium-scale business applications serving the needs of departmental units, larger-scale platforms that administer Cigna's Specialty product solutions offered to our customers and clients, and system-to-system integration services between software units to support the flow of information across systems for end-to-end processing. Application Development Advisors collaborate within agile scrum teams, delivering software using SDLC and Agile best practices, industry standards, and Cigna guidelines to meet the needs of our businesses and to ensure quality, effectiveness, and scalability for growth. Our team members work closely with business teams and other technology units within the Cigna Enterprise to deliver end-to-end solutions, requiring skills to understand the businesses where Cigna Specialty Technology operates; to refine the requirements for solutions needed and to uncover dependencies; to work closely with Systems Architects, Product Owners, and Production Support teams in designing, coding, and testing; and ultimately to deliver working software that integrates within the broader Cigna ecosystem and enables our Cigna Specialty businesses. We are looking to our Application Development Advisors to bring a DevOps mindset, characterized by an automation-first and continuous improvement orientation. Candidates should have expertise in software engineering as well as SDLC using Agile methodologies.
Experience with both creating web-based applications, either directly or with low-code tools, and designing and delivering integration services, built on an understanding of the interrelationships between systems and implemented using library calls, REST APIs, database queries, etc., is highly desired. In the end, the software we produce is all about creating business value for our customers and clients within the Cigna Specialty product space. Job Description & Responsibilities:
- Collaborate, learn, and deliver software as part of an Agile scrum team
- Learn the Cigna Supplemental Health businesses to work directly with Cigna Specialty business stakeholders in understanding needs and requirements
- Demonstrate skill in using coding standards, writing reusable code, and being an active participant in code reviews
- Apply a strong understanding of development and testing techniques and toolsets
- Design, configure, and implement middleware products and application design/development within the supported technologies and products
- Support applications through proactive monitoring and troubleshooting, and manage the design of supported applications, assuring performance, availability, security, and capacity
- Incorporate automated testing, CI/CD, and an Agile/Lean mindset, both in collaboration with colleagues and in the software delivered
Experience Required: Typically, 8+ years of experience in IT, specifically within application development or integration services development.
5+ years of experience within the following areas is required:
- Web services experience using Python-based frameworks
- Database experience, including data modelling and authoring stored procedures, leveraging Oracle, MS SQL Server, or PostgreSQL
- Experience using Python
Experience Desired: Experience within the following technologies is desired:
- Python frameworks
- Oracle PL/SQL stored procedures and web services
- Web development with UI development using HTML and JavaScript, leveraging either Angular or React
- Exposure to serverless, event-driven frameworks: EC2, EKS, ECS, Lambda, Step Functions, SQS, SNS, Jenkins pipelines, API Gateway; AWS-specific tooling with Python development and a DevOps background required
- Experience with Agile development (Scrum methodology) is good to have
Primary Skills: Advanced concepts with relevant, hands-on experience in many of the following areas are generally preferred:
- Build serverless, message-driven, and event-driven interfaces using available cloud solutions
- Help design, create, and manage continuous delivery pipelines for your team's code and deliverables using Jenkins/GitHub/Airflow/Terraform
- Message- and event-driven architecture
- Enterprise Integration Patterns
- Interface with data behind the scenes by creating APIs in Python with REST frameworks
- Database development and tuning
- Performance (threading, indexing, clustering, caching)
- Transaction management
- Document-centric data architecture (XML DB/NoSQL/JSON)
- UI development (HTML5, Angular, Bootstrap)
Additional Skills:
- Constantly look at and evaluate new technologies to see if they can bring value to the organization
- Leverage existing open-source frameworks and third-party components/libraries to develop robust enterprise solutions; a mentality of integrating and reusing existing capabilities vs. building from scratch is highly desired
- Analytical Skills: Candidate must be able to recognize the needs of customers and create simple solutions that answer those needs. The ability to document analysis and scenarios is critical.
- Communication: Candidate must be able to clearly communicate their ideas to peers, stakeholders, and management.
- Creativity: Creativity is needed to help invent new ways of approaching problems and developing innovative applications, as well as to bring in experience from other industries.
- Customer Service: If dealing directly with clients and customers, the candidate needs good customer-service skills and a consultant mentality to answer questions and fix issues.
- Teamwork: Candidate must work well with others as part of a distributed agile (SAFe) team of developers, analysts, QA, and more.
- Industry Experience: Prior work experience within Insurance, Health Insurance, or Financial Services preferred.
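The Enterprise Integration Patterns and message-driven design called out in this posting can be illustrated with one classic pattern, a content-based router. The message fields and queue names below are hypothetical.

```python
def route(message, routes, default="dead-letter"):
    """Content-based router: pick a destination queue from the message type."""
    return routes.get(message.get("type"), default)

# Hypothetical routing table for the specialty lines described in the posting
routes = {"dental-claim": "dental-queue", "vision-claim": "vision-queue"}

print(route({"type": "dental-claim", "id": 7}, routes))  # dental-queue
print(route({"type": "unknown"}, routes))                # dead-letter
```

On AWS the same pattern is usually expressed declaratively, e.g. as SNS subscription filter policies or EventBridge rules, rather than in application code.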
Posted 2 months ago
8 - 13 years
15 - 30 Lacs
Bengaluru
Work from Office
Dear Candidate, We are hiring a Python Tech Lead for Otomeyt AI. Please find the job description below. Mandatory Skills: Python + AWS (Lambda, SES, SQS) + Boto3 + REST APIs (FastAPI & Flask). Job Summary: We are seeking a Python Technical Lead with strong expertise in software architecture, AI model integration, and cloud technologies. The ideal candidate will have 8+ years of experience in Python development and will play a crucial role in architecting scalable solutions, leading AI/ML integrations, and driving strategic technical decisions. This role requires a strong mix of hands-on development, technical leadership, and system design to ensure high performance, security, and scalability of AI-powered applications. Key Responsibilities:
1. Technical Leadership & Architectural Decisions: Define and implement high-level software architecture for AI-driven applications. Establish best practices and coding standards to ensure code quality and maintainability. Make critical technical decisions around system design, cloud infrastructure, and microservices architecture.
2. AI/ML Model Integration & Deployment: Collaborate with data scientists to integrate and optimize AI/ML models in production environments. Develop robust AI pipelines for model training, inference, and continuous learning. Implement MLOps best practices to automate model deployment and monitoring.
3. Backend & API Development: Design and develop high-performance, scalable RESTful APIs using Python frameworks (FastAPI, Flask). Implement event-driven architectures (Kafka, RabbitMQ, Redis Pub/Sub) to optimize AI workflows. Manage database design and performance tuning for structured (PostgreSQL) and unstructured (NoSQL, DynamoDB) data.
4. Cloud & DevOps: Lead cloud strategy and architecture decisions using AWS. Oversee CI/CD pipelines, Docker, Kubernetes, and serverless deployment strategies. Ensure security and compliance best practices for data protection and API security.
5.
Team Mentorship & Collaboration: Act as a technical mentor, guiding junior and mid-level engineers. Work closely with product managers, DevOps, and AI researchers to align business and technical goals. Conduct code reviews, technical training, and architectural discussions. Required Skills & Qualifications:
- 8+ years of Python development experience with a focus on scalability, architecture, and AI integration
- Proven experience in technical leadership, mentoring, and decision-making
- Strong background in system design, microservices, and event-driven architectures
- Hands-on experience with AWS, including Lambda, ECS, S3, API Gateway, DynamoDB, SQS, and SES
- Experience with Docker, Kubernetes, Terraform, and CI/CD pipelines
- Knowledge of security best practices, API authentication, and role-based access control (RBAC)
- Strong problem-solving skills and the ability to lead projects end to end
Preferred Qualifications:
- Familiarity with Generative AI applications
- Exposure to GraphQL, WebSockets, and real-time data processing
- Previous experience as a Technical Lead or Principal Engineer
Why Join Us? Opportunity to lead AI-driven innovation in a cutting-edge product. Work with a talented team of engineers, AI scientists, and cloud architects. A fast-paced environment where your technical decisions will have a direct impact. Thanks, Garima
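The event-driven architectures this posting lists (Kafka, RabbitMQ, Redis Pub/Sub) all share the publish/subscribe shape, which can be sketched in-memory. This is an illustration of the pattern only, not any broker's API; the topic name and payload fields are hypothetical.

```python
from collections import defaultdict

class PubSub:
    """Minimal in-memory publish/subscribe bus."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every payload on a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver a payload to all callbacks subscribed to the topic."""
        for callback in self.subscribers[topic]:
            callback(payload)

bus = PubSub()
received = []
bus.subscribe("model.trained", received.append)
bus.publish("model.trained", {"model_id": "m-1", "accuracy": 0.93})
print(received)  # [{'model_id': 'm-1', 'accuracy': 0.93}]
```

A real broker adds what this sketch omits: persistence, delivery across processes, consumer groups, and retry/ordering guarantees.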
Posted 2 months ago
7 - 12 years
27 - 40 Lacs
Bengaluru
Hybrid
The candidate should have expertise in AWS serverless architecture, event-driven programming, and microservices design patterns, and will design, develop, and deploy solutions using AWS services such as Lambda, API Gateway, SQS, SNS, and Step Functions.
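The Step Functions orchestration mentioned above can be sketched as a tiny state-machine runner: each state transforms the payload and names the next state. The state names and the simplified definition format are hypothetical; real Step Functions workflows are defined in the Amazon States Language, not Python.

```python
def run_state_machine(definition, start, payload):
    """Run a linear chain of states, each mapping to a handler function."""
    state = start
    while state is not None:
        step = definition[state]
        payload = step["handler"](payload)  # each handler returns the next payload
        state = step.get("next")            # None terminates the machine
    return payload

# Hypothetical two-state workflow: validate an order, then enrich it
definition = {
    "Validate": {"handler": lambda p: {**p, "valid": True}, "next": "Enrich"},
    "Enrich": {"handler": lambda p: {**p, "region": "eu"}, "next": None},
}
print(run_state_machine(definition, "Validate", {"order": 1}))
```

In Step Functions each handler would typically be a Lambda invocation, with retries, branching, and error states declared in the workflow definition rather than coded by hand.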
Posted 2 months ago