3.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
The Company: Oracle is the world's leading provider of business software. With a presence in over 175 countries, we are one of the biggest technology companies on the planet. We're using innovative emerging technologies to tackle real-world problems today. From advancing energy efficiency to re-imagining online commerce, the work we do is not only transforming the world of business; it is helping advance governments, power nonprofits, and give billions of people the tools they need to outpace change. For more information about Oracle (NYSE: ORCL), visit us at .

Oracle's commitment to R&D is a driving factor in the development of technologies that have kept Oracle at the forefront of the computer industry. If you are passionate about advanced development and working on next-generation, large-scale distributed systems for the most popular open-source database in the world, optimized for the cloud to provide the best performance, we would like to talk with you.

What you will do: The HeatWave service team is responsible for the massively parallel, high-performance, in-memory query accelerator. HeatWave is 6.5X faster than Amazon Redshift at half the cost, 7X faster than Snowflake at one-fifth the cost, and 1400X faster than Amazon Aurora at half the cost. It is the only cloud-native database service that combines transactions, analytics, and machine learning services into HeatWave, delivering real-time, secure analytics without the complexity, latency, and cost of ETL duplication. This eliminates the need for complex, time-consuming, and expensive data movement and integration with a separate analytics database. The new MySQL Autopilot uses advanced machine-learning techniques to automate HeatWave, making it easier to use and further improving performance and scalability. Join us to help further develop this amazing technology. This cutting-edge technology serves critical business needs and is changing the way data transactions function all over the world. You will make a technical impact with the work you do, in a fun and flexible workplace where you'll enhance your skills and build a solid professional foundation.

As a Cloud Operations engineer on Oracle's HeatWave service team, you will contribute to one of the hottest cloud services, constantly delivering and improving on it. Operations work includes troubleshooting production issues and handling requests for upgrades, patches, or modifications. When not working on operations, you will take on software engineering tasks such as reviewing incidents to improve services, tools, and runbooks, increasing reliability and scalability and reducing operational overhead through automation, training, documentation, service enhancement, or process changes. This position offers the opportunity to learn the ins and outs of current cloud service architecture, deployment, monitoring, and operational technologies; many useful and desirable skills will be acquired if not already present. See below for the current technologies in play. The ideal candidate has some of these skills, but the keys are motivation, the ability to learn quickly, and a passion for an excellent customer experience. Learn more at .

Career Level - IC2

Responsibilities. The engineer will:
- Improve the monitoring, notifications, and configuration of HeatWave services.
- Perform proactive service checks; monitor, triage, and address incoming system/application alerts, emails, and phone calls to ensure appropriate priority and meet SLA response times.
- Triage and troubleshoot service-impacting events from multiple signals, including phone, email, service telemetry, and alerting.
- Participate in service activities such as upgrades and patching.
- Identify and work with engineering to implement opportunities for automation, signal-noise reduction, and fixes for recurring issues, reducing the time to mitigate service-impacting events and increasing the productivity of cloud operations and development resources.
- Coordinate, document, and track critical incidents, ensuring rapid and complete issue resolution and an appropriate closed loop with customers and other key stakeholders.
- Contribute to a healthy, supportive, and inclusive team culture.
- Provide feedback to development teams about the functionality and UIs of operations administration dashboards.
- Up-skill by learning new features delivered for the service in accordance with the product roadmap.
- Improve the availability, scalability, latency, ease of use, and efficiency of service control planes and operational tooling.
- Participate in service capacity planning and demand forecasting, software performance analysis, and system tuning.
- Support the secondary HeatWave on AWS cloud service as business requires.
- Potentially participate in regular rotations as a central part of the 24x7 operations team, including rotational work on weekends, public holidays, and US East timezone shifts. Engineers need to be reliable in working scheduled hours, and to be motivated quick learners.

Desired skills (AWS-specific skills are a plus but not strictly required):
- Familiarity with AWS services (e.g., Lambda, Step Functions, DynamoDB, AWS Session Manager, CloudWatch).
- Familiarity with OCI or equivalent cloud services (e.g., IAM, Compute, Load Balancer, Object Storage, Health Monitor).

General skills for working in this operational role:
- Basic understanding of serverless cloud architecture.
- Familiarity with the MySQL database, its SQL query interface, and general database concepts.
- Experience with Python programming, bash scripting, and Git.
- Basic Linux system administration knowledge and experience, and familiarity with Linux troubleshooting and internals.
- Familiarity with networking concepts and the DevOps model.
- Ability to work productively in a fast-paced, team-oriented Agile environment.
- Contribution to operational activities such as writing runbooks, troubleshooting, operations automation, and instrumentation for metrics and events.
- Good technical writing and communication skills; engineers must be able to clearly describe operational issues and corrective actions for incidents.
- Experience with Agile methodology (Scrum or Kanban).
- Very strong analytical skills for identifying the root causes of problems.
- Experience collaborating with cross-functional teams such as Development, QA, and Product Management.
- A systematic problem-solving approach, combined with a strong sense of ownership and drive in resolving operations issues.
- Experience working under pressure to mitigate customer issues affecting service reliability, data integrity, and overall customer experience.
- Monitoring, management, analysis, and troubleshooting of large-scale distributed systems.
- BS/BE or MS/MTech degree in Computer Science, Electrical/Hardware Engineering, or a related field.
2+ years of experience delivering and operating large-scale, highly available distributed systems. 2+ years of work experience as a Software, Site Reliability, Operations, or Customer Support engineer.

Qualifications: Career Level - IC2
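Since the role leans on Python scripting and telemetry-driven alert triage, here is a minimal, hedged sketch of what an alert-triage helper might look like for the secondary AWS deployment mentioned above, using CloudWatch. The alarm-name prefix is invented and this is not Oracle's actual tooling:

```python
"""Hedged sketch: listing active CloudWatch alarms for on-call triage."""
import boto3

SERVICE_PREFIX = "heatwave-"  # hypothetical alarm-name prefix

def alarms_needing_triage():
    """Return (name, reason) pairs for alarms currently in the ALARM state."""
    cloudwatch = boto3.client("cloudwatch")
    paginator = cloudwatch.get_paginator("describe_alarms")
    active = []
    for page in paginator.paginate(StateValue="ALARM"):
        for alarm in page["MetricAlarms"]:
            if alarm["AlarmName"].startswith(SERVICE_PREFIX):
                active.append((alarm["AlarmName"], alarm["StateReason"]))
    return sorted(active)

if __name__ == "__main__":
    for name, reason in alarms_needing_triage():
        print(f"{name}: {reason}")
```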
Posted 1 day ago
5.0 - 8.0 years
14 - 17 Lacs
Hyderabad, Bengaluru
Hybrid
We're Hiring | Java Backend Developers (AWS Lambda / Kafka). Locations: Bangalore (5+ yrs) | Hyderabad (6+ yrs). Work Mode: Hybrid.

Key Skills (Mandatory): Java 8, Spring Boot, Microservices, and AWS (Lambda functions) or Kafka, depending on role priority.
Good to Have: JavaScript integration experience.

Responsibilities:
- Design, develop, and maintain scalable backend services using Java 8 and Spring Boot
- Build and integrate a microservices architecture
- Work with AWS services (Lambda functions) and/or implement Kafka-based messaging solutions
- Ensure high performance, scalability, and seamless system integrations
- Collaborate with frontend teams for smooth JavaScript/API integrations

We have two parallel opportunities: one focused on AWS Lambda (serverless) and another on Kafka (messaging/streaming).
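For the Kafka-flavored role, a minimal sketch of the produce side of a messaging integration. The posting itself is Java/Spring Boot; this Python version (using the kafka-python package) only illustrates the pattern, and the broker address and topic name are invented:

```python
"""Hedged sketch of publishing an event to a Kafka topic."""
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed local broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish an order event to a hypothetical topic and block until acked.
future = producer.send("orders.created", {"orderId": 42, "status": "NEW"})
metadata = future.get(timeout=10)
print(f"wrote to {metadata.topic}, partition {metadata.partition}")
producer.flush()
```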
Posted 4 days ago
5.0 - 10.0 years
6 - 9 Lacs
Noida, Gurugram
Work from Office
5+ years of overall technical experience. Minimum 2 years of experience in Node.js. Minimum 1 year of relevant experience in team management. Experience with Angular, React.js, Lambda services, or PHP is an added advantage.
Posted 4 days ago
2.0 - 7.0 years
3 - 6 Lacs
Gurugram
Work from Office
Backend Developer - Node.js + MongoDB + PHP + MySQL / 5+ years' experience. Job description: Design, implement, and support the technical solution. Participate actively in all phases of the application development lifecycle.
Posted 4 days ago
5.0 - 10.0 years
10 - 15 Lacs
Noida, Gurugram
Work from Office
5+ years of overall technical experience. Minimum 2 years of experience in Node.js. Minimum 1 year of relevant experience in team management. Experience with Angular, React.js, Lambda services, or PHP is an added advantage.
Posted 5 days ago
5.0 - 10.0 years
10 - 15 Lacs
Noida, Gurugram
Work from Office
5+ years of overall technical experience. Minimum 2 years of experience in Node.js. Minimum 1 year of relevant experience in team management. Experience with Angular, React.js, Lambda services, or PHP is an added advantage.
Posted 5 days ago
5.0 - 10.0 years
10 - 12 Lacs
Noida, Gurugram
Work from Office
5+ years of overall technical experience. Minimum 2 years of experience in Node.js. Minimum 1 year of relevant experience in team management.

Required Candidate Profile: Experience with Angular, React.js, Lambda services, or PHP is an added advantage.
Posted 5 days ago
5.0 - 10.0 years
10 - 15 Lacs
Noida, Gurugram
Work from Office
5+ years of overall technical experience. Minimum 2 years of experience in Node.js. Minimum 1 year of relevant experience in team management. Experience with Angular, React.js, Lambda services, or PHP is an added advantage.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
Role Overview: At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Key Responsibilities:
- Lead the design, implementation, and maintenance of scalable ML infrastructure.
- Collaborate with data scientists to deploy, monitor, and optimize machine learning models.
- Automate complex data processing workflows and ensure data quality.
- Optimize and manage cloud resources for cost-effective operations.
- Develop and maintain robust CI/CD pipelines for ML models.
- Troubleshoot and resolve advanced issues related to ML infrastructure and deployments.
- Mentor and guide junior team members, fostering a culture of continuous learning.
- Work closely with cross-functional teams to understand requirements and deliver innovative solutions.
- Drive best practices and standards for ML Ops within the organization.

Qualifications Required:
- Minimum 5 years of experience in infrastructure engineering.
- Proficiency in using EMR (Elastic MapReduce) for large-scale data processing.
- Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models.
- Strong proficiency in Python scripting and other programming languages.
- Experience with CI/CD tools and practices.
- Solid understanding of the machine learning lifecycle and best practices.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and the ability to work collaboratively in a team environment.
- Demonstrated ability to take ownership and drive projects to completion.
- Proven experience in leading and mentoring teams.

Additional Details of the Company: EY exists to build a better working world, helping to create long-term value for clients, people, and society and to build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
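Given the SageMaker deployment experience the posting asks for, here is a hedged sketch of calling a deployed real-time endpoint with boto3. The endpoint name and payload shape are assumptions for illustration, not EY's actual setup:

```python
"""Hedged sketch: invoking a SageMaker real-time inference endpoint."""
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def predict(features):
    """Send one feature vector to a hypothetical endpoint, return its JSON reply."""
    response = runtime.invoke_endpoint(
        EndpointName="churn-model-prod",   # assumed endpoint name
        ContentType="application/json",
        Body=json.dumps({"instances": [features]}),
    )
    return json.loads(response["Body"].read())

print(predict([0.3, 12, 7.5]))
```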
Posted 5 days ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are seeking a skilled AEM DevOps / Release Engineer who will be responsible for managing code deployments, release processes, and platform-level configurations across Adobe Experience Manager (AEM) environments. The role requires strong expertise in CI/CD tools, AEM platform operations, and cloud infrastructure (AWS).

Key Responsibilities:
Code Deployment & Release Management:
- Manage end-to-end release cycles, including deployment planning and execution.
- Automate and monitor build pipelines using Jenkins and Bitbucket.
- Implement and maintain branching/merging strategies for stable releases.
AEM Platform Management:
- Configure and support AEM environments (Author, Publish, Dispatcher).
- Set up and maintain OOTB AEM workflows and provide support for custom workflows.
- Perform dispatcher rule changes and coordinate DNS updates during deployments.
- Integrate and troubleshoot Adobe Analytics and Adobe Target within AEM.
- Ensure secure access via IP whitelisting and related configurations.
Cloud & Infrastructure (AWS):
- Work with AWS to configure infrastructure components as needed.
- Provision and manage new databases for applications.
- Set up and maintain Lambda functions to support AEM and business integrations.
Operational Support:
- Monitor application health and troubleshoot deployment issues.
- Collaborate with developers, QA, and business stakeholders to ensure smooth release cycles.
- Document deployment procedures, workflows, and configurations.

Required Skills & Experience:
- Hands-on experience with CI/CD tools (Jenkins, Bitbucket, Git).
- Strong knowledge of the Adobe Experience Manager (AEM) platform and dispatcher configurations.
- Experience with Adobe Analytics and Adobe Target integrations.
- Familiarity with release management processes in enterprise environments.
- Good understanding of IP whitelisting, DNS updates, and network-level configurations.
- Experience with AWS services: databases (RDS/DynamoDB), Lambda functions, networking.
- Strong troubleshooting and problem-solving skills.
- Ability to work in cross-functional teams (Dev, QA, Infra, Business).

Nice to Have:
- Knowledge of AEM as a Cloud Service (AEMaaCS).
- Exposure to automation scripting (Python, Shell, or Groovy).
- Experience with monitoring tools (CloudWatch, New Relic, Datadog).

Role Attributes:
- Strong communication and collaboration skills.
- Proactive in identifying risks and proposing solutions.
- Ability to manage multiple releases and priorities in a fast-paced environment.
Posted 6 days ago
2.0 - 5.0 years
0 Lacs
Gurugram
Work from Office
Develop full-stack forecasting and simulation tools using React, Node.js, and Python on AWS. Modernize Excel/VBA tools into scalable, cloud-native web applications. Deploy solutions efficiently with AWS services, CI/CD, and infrastructure-as-code tools.

Benefits: food allowance, annual bonus, health insurance, provident fund.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
You will be responsible for designing and implementing highly performant algorithms to process, transform, and analyze large volumes of data. You will apply advanced DSA concepts like Trees, Graphs, Tries, Heaps, and Hashing for data indexing, filtering, and routing. Additionally, you will develop and optimize data pipelines, stream processors, or caching systems, and architect scalable systems for data ingestion, storage, and retrieval (structured/unstructured). Collaboration with cross-functional teams to integrate and deploy performant services will be a key part of your role. You will also perform profiling, tuning, and memory optimization to ensure low-latency operations. Writing clean, modular, testable code and participating in code reviews are essential responsibilities.

To be successful in this role, you should have a strong command of core DSA concepts such as Binary Search, Heaps, Graphs, Tries, and Trees (AVL, B-Trees, Segment Trees). Hands-on experience with algorithms for sorting, searching, indexing, and caching large datasets is required, as is proficiency in one or more of the following languages: Java, Python. You should also have experience working with large datasets in real time or batch, a solid grasp of time and space complexity and performance tuning, and familiarity with memory management, garbage collection, and data locality. Deep technical knowledge and hands-on experience in architecture design, development, deployment, and production operation are crucial, along with familiarity with agile software development and modern development tools and frameworks, strong engineering principles (including automation, quality, and best practices with a high bar), and extensive experience with the complete software development life cycle end to end, including production monitoring.

It would be good to have a broad understanding of Data Lakehouse formats like Apache Hudi, Apache Iceberg, or Delta Lake. Demonstrable experience in Spark programming and experience with Spark on DBT with AWS Glue or Apache Polaris is a plus, as is a broad understanding of cloud architecture tools and services such as S3, EMR, Kubernetes, and Lambda functions. Experience in AWS and Azure is highly desirable, as is rich experience and deep expertise in Big Data and large-scale data platforms, especially Data Lakes.
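As a concrete illustration of the trie-based indexing and routing the role describes, here is a minimal sketch (illustration only, with invented shard names, not the employer's code) that routes a key to the handler registered under its longest matching prefix:

```python
"""Hedged sketch: a trie used as a longest-prefix router."""

class TrieNode:
    def __init__(self):
        self.children = {}
        self.handler = None  # payload stored at a terminal node

class PrefixRouter:
    """Route a key to the handler registered under its longest prefix."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, handler):
        node = self.root
        for ch in prefix:
            node = node.children.setdefault(ch, TrieNode())
        node.handler = handler

    def route(self, key):
        node, best = self.root, None
        for ch in key:
            node = node.children.get(ch)
            if node is None:
                break
            if node.handler is not None:
                best = node.handler  # remember the longest match so far
        return best

router = PrefixRouter()
router.insert("us-east", "shard-1")
router.insert("us-west", "shard-2")
print(router.route("us-east-1a"))  # -> shard-1
```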
Posted 1 week ago
6.0 - 11.0 years
15 - 25 Lacs
Pune
Hybrid
Deliver product features end to end on AWS/Azure cloud. Collaborate directly with the US client on product and feature requirements. Independently design end-to-end features and modules. Leverage Copilot/Cursor to maximize automation in the dev cycle. Oversee the QA cycle on features.

Required Candidate Profile: 5+ years of experience building cloud apps on AWS or Azure. Python: Django, Flask, or FastAPI. RDBMS and NoSQL databases. System architecture and design. Experience across programming languages.
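Since the profile names Django, Flask, or FastAPI, here is a minimal hedged FastAPI sketch of the kind of cloud app endpoint the role involves. The resource model and routes are invented:

```python
"""Hedged sketch: a minimal FastAPI service. Run with `uvicorn app:app`."""
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item(item: Item) -> dict:
    # A real service would persist to an RDBMS or NoSQL store here.
    return {"created": item.name, "price": item.price}

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}
```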
Posted 1 week ago
9.0 - 13.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As a Senior Python Developer, you will be based in Trivandrum or Kochi with 9-12 years of experience. Your primary skills must include expertise in Python; AWS services such as EC2, RDS, SES, S3, and Lambda; CI/CD practices; Jenkins; GitHub; SQL query optimization; Object-Oriented Programming in Python; DevOps knowledge; Visual Studio Code; Agile methodology (Scrum, Jira); Test-Driven Development (TDD); software design patterns; and mentorship experience.

It would be beneficial if you have experience with Python Lambda functions in AWS, exposure to the banking and financial domain, experience migrating applications across different tech stacks, familiarity with Scrum and Jira for project management, knowledge of unit tests and integration tests, and a data-driven decision-making mindset.

Your responsibilities will involve designing, developing, and deploying solutions for the business unit, ensuring high-quality software delivery, implementing process and technical enhancements, mentoring junior developers, contributing across various layers of the stack, engaging in data-driven decision-making, and participating in Agile processes to ensure quality delivery using Scrum.

The ideal candidate is an expert Python developer with extensive AWS experience, exceptional communication skills, the ability to work independently with minimal guidance, strong problem-solving capabilities for implementing technical and non-technical improvements, and a team-player mindset for collaborating effectively with cross-functional teams.
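Because the posting highlights Python Lambda functions and SES, here is a hedged sketch of the handler shape. The sender and recipient addresses are placeholders, not the employer's configuration:

```python
"""Hedged sketch: an AWS Lambda handler that emails a notification via SES."""
import json
import boto3

ses = boto3.client("ses")  # client created once, outside the handler

def lambda_handler(event, context):
    """Send a notification email for each record in the triggering event."""
    for record in event.get("Records", []):
        ses.send_email(
            Source="alerts@example.com",  # assumed SES-verified sender
            Destination={"ToAddresses": ["ops@example.com"]},
            Message={
                "Subject": {"Data": "Record processed"},
                "Body": {"Text": {"Data": json.dumps(record)[:1000]}},
            },
        )
    return {"statusCode": 200, "body": "ok"}
```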
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
Karnataka
On-site
As a Technical Support Specialist, you will be responsible for diagnosing and troubleshooting technical issues across customer and lender journeys. Utilizing your skills in troubleshooting, customer service, and technical knowledge, you will read AWS logs, troubleshoot in Postman, and effectively report errors to the tech team.

Your role will also involve creating and maintaining comprehensive documentation, including troubleshooting guides, FAQs, and knowledge base articles. This documentation will assist customers in independently resolving common issues, ensuring clarity, conciseness, and alignment with the latest product releases.

Providing exceptional customer support is key in this position. You will promptly respond to partner queries and guide customers through product features, functionalities, and best practices, aiming to enhance their usage and satisfaction. Collaboration with cross-functional teams such as software development, product management, and customer experience is essential: by escalating and prioritizing technical issues, driving product improvements, and advocating for customer needs and feedback, you will contribute to the overall success of the product.

To excel in this role, you should have a bachelor's degree (BTech/B.E.) along with at least 1 year of experience with SQL, AWS logging, and Postman. Additionally, familiarity with HTML5, CSS3, JavaScript, and AWS tools like App Runner and Lambda functions will be beneficial. Your willingness to adapt to a startup environment, understanding of general web functions and standards, and proficiency in tools such as GitHub, JIRA, and database viewers (DBeaver, PgAdmin) are valuable assets that will support your success in this position.
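For the "reading AWS logs" part of the role, a hedged sketch of pulling recent error lines with boto3. The log group name and filter pattern are invented for illustration:

```python
"""Hedged sketch: filtering CloudWatch Logs while triaging a customer issue."""
import boto3

logs = boto3.client("logs")

def recent_errors(log_group="/aws/lambda/lender-journey", pattern="ERROR"):
    """Return messages matching the filter pattern from recent log events."""
    response = logs.filter_log_events(
        logGroupName=log_group,
        filterPattern=pattern,
        limit=50,
    )
    return [event["message"] for event in response["events"]]

for line in recent_errors():
    print(line.rstrip())
```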
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Python backend developer with experience in AWS services, your role will involve designing, implementing, and operating both streaming and batch pipelines that scale effectively. You will collaborate with engineers and data analysts to create reliable datasets that are not only trustworthy but also easily understandable and accessible to the entire organization. Working in a fast-paced start-up environment, you will be expected to exhibit passion for your work and thrive in an international setting. While a background in the telecom industry is advantageous, it is not a mandatory requirement.

Your responsibilities will include writing efficient Lambda functions for lightweight data processing, as well as gathering requirements and devising solutions that prioritize simplicity of design, operational efficiency, and stakeholder considerations across various teams.

Proficiency in the following is essential for this role:
- Decryption
- Routing
- Filtering
- Python or Go
- Data modeling
- Iceberg on AWS S3

If you are someone who is adept at leveraging AWS services and has a strong foundation in Python development, this role offers an exciting opportunity to contribute to the development of robust data pipelines and datasets within a dynamic and innovative work environment.
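A hedged sketch of the "lightweight Lambda for data processing" idea above, assuming an S3-triggered function that filters and routes new objects. The bucket names and the .json filtering rule are invented:

```python
"""Hedged sketch: a Lambda that filters and routes new S3 objects."""
import urllib.parse
import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "curated-events"  # hypothetical destination bucket

def lambda_handler(event, context):
    """Copy only .json objects from the landing bucket to the curated bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        if not key.endswith(".json"):
            continue  # filtering: skip non-JSON payloads
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"routed": len(event["Records"])}
```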
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You will be responsible for leading the design, development, and deployment of backend services using Python (Django) and PostgreSQL. Your role will involve architecting and optimizing cloud infrastructure on AWS/GCP to ensure high availability and scalability. You should have proven experience in designing and building scalable, high-performing algorithms that solve complex problems efficiently in high-demand environments. Additionally, you will lead, mentor, and inspire the backend engineering team, fostering a culture of collaboration and innovation. Collaborating with product teams to identify solutions for business needs, and overseeing their technical implementation, will be part of your responsibilities, as will troubleshooting and debugging issues in production environments and staying current with new technologies and best practices to drive continuous improvement. Furthermore, you will provide technical guidance and best practices to ensure code quality and system performance, set coding standards, conduct code reviews, and mentor the team on best practices.

For this role, you should have at least 8 years of experience in backend development, with a focus on Python (Django) and relational databases like PostgreSQL. A Master's degree or higher in Computer Science, Engineering, or a related field is required. You must demonstrate proven experience in building and scaling applications on AWS/GCP, expertise in LLM-supported search algorithms, and experience building OpenAPI interfaces to support marketplace integrations. Proficiency in working with public, private, and hybrid cloud environments, building and managing scalable cloud infrastructure, familiarity with container technologies like Docker, and hands-on experience with cloud technologies such as message queues, Lambda functions, storage buckets, NoSQL databases, and relational databases are necessary. Experience with multi-tenancy and distributed databases, OAuth 2.0, two-factor authentication (2FA), and evolving authentication methods such as biometrics, adaptive access control, and zero-trust security models is also expected.

You should be skilled in designing, developing, and supporting microservices using mainstream programming languages like Java, Python, or C++, possess strong knowledge of network protocols and scalable networking technologies, and have experience working with cloud environments such as GCP, Azure, or AWS. Knowledge of Platform as a Service (PaaS) technologies like Kubernetes, competence in conducting code and architecture reviews, strong communication skills, attention to detail, and experience mentoring teams are essential. Demonstrated leadership abilities, with experience guiding engineering teams, are also required for this position. This job was posted by Jhansi Peter from Linarc.
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Haryana
On-site
You will be responsible for designing, building, and maintaining scalable and efficient data pipelines that move data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python. Your role will involve implementing and managing ETL/ELT processes to ensure seamless data integration and transformation while adhering to information security and complying with data governance standards. Additionally, you will maintain and enhance data environments, including data lakes, warehouses, and distributed processing systems. It is crucial to use version control systems (e.g., GitHub) effectively to manage code and collaborate with the team.

Primary skills: expertise in enhancements, new development, defect resolution, and production support of ETL development using AWS native services. Your responsibilities will also include integrating data sets using AWS services such as Glue and Lambda functions, utilizing AWS SNS for sending emails and alerts, authoring ETL processes using Python and PySpark, monitoring ETL processes using CloudWatch events, connecting with different data sources like S3, and validating data using Athena. Experience in CI/CD using GitHub Actions, proficiency in Agile methodology, and extensive working experience with advanced SQL are essential for this role.

Secondary skills: familiarity with Snowflake and an understanding of its architecture, including concepts like internal and external tables, stages, and masking policies.

Your competencies and experience should include deep technical skills in AWS Glue (Crawler, Data Catalog) for over 10 years; hands-on experience with Python and PySpark for over 5 years; PL/SQL experience for over 5 years; CloudFormation and Terraform for over 5 years; CI/CD with GitHub Actions for over 5 years; experience with BI systems (Power BI, Tableau) for over 5 years; and a good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda for over 5 years. Familiarity with Jira and Git is highly desirable.

This position requires a high level of technical expertise in AWS Glue, Python, PySpark, PL/SQL, CloudFormation, Terraform, GitHub Actions, BI systems, and AWS services, along with a solid understanding of data integration, transformation, and data governance standards. Your ability to collaborate effectively with the team, manage data environments efficiently, and ensure the security and compliance of data will be critical to success in this role.
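A hedged skeleton of the kind of Glue job the posting describes: reading from the Data Catalog with PySpark and writing Parquet back to S3. Database, table, and bucket names are placeholders, and the script assumes it runs inside AWS Glue:

```python
"""Hedged sketch: an AWS Glue ETL job skeleton (PySpark)."""
import sys
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Data Catalog (populated by a crawler), drop rows with null
# keys, and write the result back to S3 as Parquet.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")
cleaned = Filter.apply(frame=source, f=lambda row: row["order_id"] is not None)
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```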
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Nagpur, Maharashtra
On-site
As a Technical Product Owner at our company, you will play a crucial role in leading the development and growth of our product. With a minimum of 6 years of experience, including at least 3 years as a technical product owner, you will be responsible for defining the product vision, roadmap, and growth opportunities. Your expertise in the technology stack, including GoLang, AWS, DynamoDB, Redis, Lambda functions, and Express Step Functions, will be essential in collaborating with architects and tech leads to define system requirements and technology solutions.

You will work closely with Customer Product Management to prioritize the product feature backlog and development, ensuring alignment with the product strategy. Conducting agile ceremonies involving client stakeholders efficiently will be part of your responsibilities, along with backlog management, iteration planning, and the elaboration of user stories. Researching and analyzing the market, users, and the product roadmap will be crucial for staying current with industry trends and competitors. Your ability to lead the planning of product releases and to mitigate impediments impacting the team's completion of goals will be key to the success of our product.

To excel in this role, you must possess exceptional organizational skills, attention to detail, and a strong analytical mindset. Being self-directed, able to work independently, and able to thrive in a fast-paced environment are essential qualities. Your high energy, can-do attitude, and strong networking skills will enable you to lead a team effectively and maintain multiple projects without compromising results.

Joining our team at GlobalLogic means being part of a high-trust organization that values integrity, continuous learning, and a culture of caring. You will have the opportunity to work on interesting and meaningful projects, collaborate with supportive teammates and leaders, and grow both personally and professionally. We offer a culture of balance and flexibility, where your well-being and work-life integration are prioritized. At GlobalLogic, we are committed to engineering impact for our clients worldwide and shaping the future through intelligent products and services. As a Technical Product Owner, you will have the chance to reimagine what's possible, contribute to cutting-edge solutions, and make a meaningful difference in the digital world.
Posted 2 weeks ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description: In-depth understanding of how to use DevOps tools and the roles they play in specific DevOps functions.
- Design, develop, and maintain CI and CD pipelines.
- Develop automation playbooks for deployments, configuration management, provisioning, reporting, and other recurring tasks.
- Ensure that deployment platforms are scalable and conform to enterprise standards.
- General understanding of pipeline technology involving Jenkins and Kubernetes with respect to OpenShift.
- Familiarity with SecDevOps technologies such as Kubernetes, Docker, and OpenShift.
- Management and monitoring of the AWS platform.
- Good understanding of, and hands-on experience with, AWS services: VPC, KMS, S3, IAM, EC2, AWS Transfer Family, Lambda, CloudWatch, Service Catalog, CodeCommit.
- Ability to analyze CloudWatch logs and troubleshoot issues.
- Good understanding of SCPs, Control Tower, StackSets, and Lambda functions.
- Python or Golang (intermediate knowledge required).
- GitHub Actions (overall CI/CD knowledge is important/good to have); can troubleshoot issues with existing workflows using workflow run logs, debug logging, etc.
- Kubernetes, Docker, Helm (basic to intermediate experience).
- Good understanding of, and hands-on experience with, Terraform (basic concepts, modules, state management, end-to-end working, etc.).
- Troubleshoot issues with existing code: load balancing, network security, standard network protocols.
- A Bachelor's degree in Computer Science or a related field (Master's preferred).
- Ability to create medium-sized solutions using procedures and classes.
- Good understanding of, and working experience with, Python dicts, JSON, and YAML.
- Ability to review code written by other programmers.
- Ability to create medium-level solutions using functions, structs, and interfaces.
- Good understanding of, and working experience with, libraries.
- Can read and understand existing GitHub workflow code; understands expressions, contexts, environment variables, jobs, runners, and secrets use.
- Can read code in HCL and deploy resources using Terraform.
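Since the posting calls out Python dicts, JSON, and YAML, a small hedged sketch of that round trip. The file name and keys are invented:

```python
"""Hedged sketch: load a YAML variable file and re-emit it as JSON."""
import json
import yaml  # pip install pyyaml

with open("deploy_vars.yaml") as fh:
    config = yaml.safe_load(fh)  # YAML -> plain Python dicts/lists

# Override one value, then emit JSON for a downstream tool.
config.setdefault("tags", {})["environment"] = "staging"
print(json.dumps(config, indent=2, sort_keys=True))
```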
Posted 2 weeks ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description: Responsible for the design, implementation, and maintenance of infrastructure and various services hosted on the AWS cloud. Good understanding of infrastructure-as-code, deployment orchestration, and application tooling. Specialization in specific areas, such as processes (e.g., CI/CD) or specific platforms (e.g., AWS Cloud). The candidate should be well versed in modern agile delivery concepts such as Kanban, Scrum, etc.
- Responsible for the planning, implementation, and growth of the AWS cloud infrastructure.
- Stay current with new services introduced by AWS and vendor products, evaluating which ones would be a good fit for the company.
- Experience designing and building web environments on AWS, including working with services like EC2, ELB, RDS, and S3.
- Experience building and maintaining cloud-native applications.
- A solid background in Linux/Unix and Windows server system administration.
- Design, develop, and maintain CI and CD pipelines.
- Develop automation playbooks for deployments, configuration management, provisioning, reporting, and other recurring tasks.
- Ensure that deployment platforms are scalable and conform to enterprise standards.
- General understanding of pipeline technology involving Jenkins and Kubernetes with respect to OpenShift.
- Management and monitoring of the AWS platform.
- Good understanding of key services like CloudFormation, KMS, S3, IAM, EC2, Service Catalog, and CodeCommit.
- Ability to analyze CloudWatch logs and troubleshoot issues.
- Good understanding of SCPs, Control Tower, StackSets, and Lambda functions.
- Python or Golang (intermediate knowledge required).
- GitHub Actions (overall CI/CD knowledge is important/good to have); troubleshoot issues with existing workflows using workflow run logs, debug logging, etc.
- Kubernetes, Docker, Helm (basic to intermediate experience).
- Good understanding of Terraform (basic concepts, end-to-end working).
- Troubleshoot issues with existing code: load balancing, network security, standard network protocols.
- Ability to create medium-sized solutions using procedures and classes.
- Good understanding of, and working experience with, Python dicts, JSON, and YAML.
- Ability to review code written by other programmers.
- Ability to create medium-level solutions using functions, structs, and interfaces.
- Good understanding of, and working experience with, libraries.
- Read and understand existing GitHub workflow code; understand expressions, contexts, environment variables, jobs, runners, and secrets use.
- Deploy resources using Terraform.
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
Haryana
On-site
As an ETL Developer on our team, you will be responsible for a range of tasks, including enhancements, new development, defect resolution, and production support of ETL development utilizing AWS native services. Your expertise will be crucial in integrating data sets through AWS services such as Glue and Lambda functions. Additionally, you will use AWS SNS for sending emails and alerts, author ETL processes using Python and PySpark, and monitor ETL processes using CloudWatch events. Your role will also involve connecting with various data sources like S3, validating data using Athena, and implementing CI/CD processes using GitHub Actions. Proficiency in Agile methodology is essential for effective collaboration within our dynamic team environment.

To excel in this position, you should possess deep technical skills in AWS Glue (Crawler, Data Catalog) with at least 5 years of experience, plus hands-on experience with Python, PySpark, and PL/SQL (a minimum of 3 years in each). Familiarity with CloudFormation, Terraform, and CI/CD via GitHub Actions is advantageous. Additionally, experience with BI systems such as Power BI and Tableau, along with a good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda, will be beneficial in this role.

If you are a detail-oriented professional with a strong background in ETL development and a passion for leveraging AWS services to drive data integration, we encourage you to apply for this exciting opportunity to join our team.
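As a hedged illustration of the "SNS for emails and alerts" duty, a small boto3 sketch. The topic ARN and job name are placeholders:

```python
"""Hedged sketch: publishing an ETL failure alert through SNS."""
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:etl-alerts"  # hypothetical

def notify_failure(job_name, error):
    """Notify topic subscribers (e.g., email) when a Glue job fails."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"ETL failure: {job_name}"[:100],  # SNS subject limit is 100 chars
        Message=f"Job {job_name} failed with: {error}",
    )

notify_failure("raw_orders_load", "Table not found: raw_orders")
```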
Posted 2 weeks ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Ready to build the future with AI? At Genpact, we don't just keep up with technology; we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's , our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at and on , , , and .

Inviting applications for the role of Principal Consultant, Python and SQL.

Responsibilities: Snowflake ETL Developer with extensive experience in SQL and Python.
- Experience in SQL, including tables, views, joins, and CTEs.
- Exposure to data movement using Python libraries.
- Exposure to logging, lambda functions, and OOP; able to connect with different database sources and fetch data.
- Exposure to Python libraries like pandas, etc.
- Experienced in SQL performance tuning and troubleshooting.
- Knowledge of CI/CD using Jenkins/Bamboo.
- Experience in Agile/Scrum-based project execution, with exposure to JIRA or similar.
- Good to have: experience in orchestration using Airflow.
- Good to have: Python experience with hands-on work in pandas, Snowpark, data crunching, and connecting to other RDBMSs.
- Very good SQL expertise in joins and analytical functions.
- Cloud knowledge of AWS services like S3, Lambda, and Athena, or Blob.
- Team player with a collaborative approach and excellent communication skills; good at communication and articulation.

Qualifications we seek in you. Minimum qualifications: Graduate/BTech/MCA. Knowledge of, or experience in, banking or capital markets is an added advantage for this position.

Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of the Broadridge team, you will be part of a culture that values empowerment and collaboration. We are looking for individuals who are passionate about advancing their own careers while also supporting and helping others grow.

Key requirements for this role include experience in Java technologies such as Core Java, J2EE, web services (SOAP, REST), Spring, Hibernate, Groovy, and Drools. Additionally, proficiency in microservices, Spring Boot, and Apache Camel is highly desirable. A strong understanding of AWS and Lambda functions is essential for this position, and familiarity with tools like Gradle/Maven, Git, Log4j, Mockito, and Kafka is a plus. Experience with DynamoDB or any other NoSQL database is preferred. Candidates should also have experience working with Agile methodology, as it is a key aspect of our development process.

If you are ready to take on new challenges and contribute to a collaborative team environment, we invite you to join us at Broadridge.
Posted 3 weeks ago