Jobs
Interviews

537 ECS Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

12.0 - 15.0 years

30 - 35 Lacs

Bengaluru

Work from Office

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity

"We are seeking a highly skilled and forward-thinking Cyber Security Sr. Manager to lead and strengthen our security posture in the domains of data protection, AI system integrity, and customer onboarding. This role is pivotal in driving secure data lifecycle practices, safeguarding AI models, and ensuring regulatory adherence across data-driven and AI-powered platforms." - Sr. Director, Cyber Security

What You'll Contribute

- Lead a team of cybersecurity professionals and provide mentorship and career development support.
- Ensure compliance with internal policies, industry standards (e.g., NIST, ISO 27001), and regulatory frameworks.
- Design, implement, and oversee security controls for enterprise data platforms and AI systems (e.g., ML pipelines, LLM integrations, analytics environments).
- Manage incident response plans related to data management, model poisoning, or data leakage from model outputs.

What We're Seeking

- 12-15 years of relevant experience in the cybersecurity domain, with 5-7 years of leadership experience.
- Bachelor's degree in MIS, computer science (or a related field), or an equivalent combination of education and experience.
- 4 years of experience with enterprise technology design, deployment, and support.
- Strong knowledge of data privacy laws (e.g., GDPR, CCPA) and cloud security (e.g., AWS, GCP).
- Experience integrating with a SIEM tool such as Splunk Cloud is mandatory.
- Experience with data security technologies (DLP, tokenization, encryption) is a plus.
- Experience working on containerized solutions with Docker and Kubernetes, using the ECR, ECS, and EKS services in AWS, is preferred.
- Experience with AWS and implementing best practices for securing cloud infrastructure and cloud services is preferred.
- Experience in Python scripting or other programming languages, with an automation mindset, is a plus.
- Excellent interpersonal, management, and customer service skills.
- Excellent written and verbal communication skills.
- Subject matter expert in the design, implementation, and support of enterprise cloud technologies.
- High degree of initiative, self-motivation, and follow-through.
- Knowledge of ITIL concepts, including Service Management and Service Delivery.
- Proven history of incident response, diagnostic activities, Root Cause Analysis (RCA), Corrective Action Plans, and advanced troubleshooting.
- Highly developed analytical skills and the ability to solve complex technical problems using a methodical, systematic approach.

Our Offer to You

- High-performance culture promoting recognition, rewards, and professional development.
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
- Competitive base salary coupled with an attractive role-specific incentive plan.
- Comprehensive benefits program.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO

At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide:

- Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders.
- Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems.
- Lending: 3/4 of US mortgages are approved using the FICO Score.
Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers, and other firms reach a new level of success. Our success depends on really talented people, just like you, who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfill your potential at www.fico.com/Careers

FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer, and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status.

Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique, and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications, we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy
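The posting above asks for Python scripting with an automation mindset alongside DLP and SIEM integration. As a rough, hypothetical sketch of that kind of work — a pre-SIEM triage step that flags log events containing sensitive-data patterns — where the patterns, field names, and rules are invented for illustration and are not FICO's actual tooling:

```python
import json
import re

# Hypothetical DLP patterns for illustration; a real ruleset would be far richer.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b\d{13,16}\b"),
}

def flag_sensitive_events(events):
    """Return events whose message matches a sensitive-data pattern,
    tagged with the rule that fired (a pre-SIEM triage step)."""
    flagged = []
    for event in events:
        for rule, pattern in PATTERNS.items():
            if pattern.search(event.get("message", "")):
                flagged.append({**event, "dlp_rule": rule})
                break  # one rule tag per event is enough for triage
    return flagged

if __name__ == "__main__":
    sample = [
        {"id": 1, "message": "user login ok"},
        {"id": 2, "message": "export sent to alice@example.com"},
        {"id": 3, "message": "card 4111111111111111 seen in output"},
    ]
    print(json.dumps(flag_sensitive_events(sample), indent=2))
```

In practice the flagged events would be forwarded to the SIEM (e.g., via Splunk's HTTP Event Collector) rather than printed.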

Posted 4 days ago

Apply

4.0 - 9.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Job Purpose and Impact

The Sr. Cloud Security Engineer will help solidify the foundation for the company's modern business applications. In this role, you will apply your knowledge of cybersecurity and cloud engineering practices to secure and operate the Infrastructure as a Service and Platform as a Service used by our data and application teams to drive business value. You will also coach and mentor junior engineers to deliver highly scalable security solutions using automation.

Key Accountabilities

- Develop and maintain the security and resiliency of enterprise-grade cloud services to support Cargill's mission.
- Implement and manage cloud security solutions across AWS and Azure.
- Automate security policies and workflows using scripting languages and cloud-native security tools to improve efficiency and scalability.
- Collaborate with cross-functional teams (enterprise IT product teams, Cloud Center of Excellence, DevOps, Software Engineering, Compliance) to integrate security practices into cloud deployments.
- Assist with incident response activities for cloud-related security incidents, including investigation, containment, remediation, and post-mortem analysis.
- Lead stakeholder interactions and mentor more junior engineers.

Qualifications

- Minimum of 4 years of relevant work experience; typically reflects 5 or more years of relevant experience.
- Experience with AWS networking security tools (ANF, security groups, etc.).
- Experience with AWS cloud security technologies, including IAM, GuardDuty, Wiz, Prisma Cloud, etc.
- Experience with container technologies (Kubernetes, EKS, ECS, Swarm, Docker, etc.) and a strong background in scripting and programming, including Terraform, Golang or Python, and PowerShell.
- Solid understanding of agile methodologies such as DevOps, CI/CD, application resiliency, and security.
- Ability to analyze system metrics and other types of data to draw insightful conclusions.
- Certifications in AWS, Azure, and/or Google Cloud.
- Strong experience in developing and architecting cloud-native applications with AWS services.
- Cybersecurity certifications (e.g., CISSP, CEH, CCSP, GSEC).
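The "automate security policies" accountability above often means codifying checks over cloud resource configuration. A minimal illustrative sketch of one such check — flagging security groups that allow world-open ingress on risky ports — where the dict shape loosely mirrors EC2's describe_security_groups output but is a simplification, not the exact API schema:

```python
def find_open_ingress(security_groups, risky_ports=(22, 3389)):
    """Flag security groups that allow 0.0.0.0/0 ingress on risky ports.
    The input shape is a simplified stand-in for EC2 security-group data."""
    findings = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            open_world = any(r.get("CidrIp") == "0.0.0.0/0"
                             for r in perm.get("IpRanges", []))
            if open_world and perm.get("FromPort") in risky_ports:
                findings.append((sg["GroupId"], perm["FromPort"]))
    return findings

if __name__ == "__main__":
    sgs = [
        {"GroupId": "sg-1", "IpPermissions": [
            {"FromPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
        {"GroupId": "sg-2", "IpPermissions": [
            {"FromPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
    ]
    print(find_open_ingress(sgs))  # only sg-1 exposes a risky port
```

A production version would pull live data via an AWS SDK and feed findings into an alerting workflow.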

Posted 4 days ago

Apply

5.0 - 9.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Full SAP life-cycle experience using HANA and DB2 (desirable) database technologies. Strong hold on SAP NetWeaver 7 or higher. Knowledge of Business Continuity, High Availability, and Disaster Recovery topics. Experience with S/4 Private Cloud Edition (PCE) side-by-side scenarios in which S/4 interacts with BTP applications.

Posted 4 days ago

Apply

2.0 - 5.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Project Role: Operations Engineer
Project Role Description: Support the operations and/or manage delivery for production systems and services based on operational requirements and the service agreement.
Must-have skills: AWS Administration
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

DESCRIPTION

Job Title: AWS Cloud Operations Senior Analyst (CL 10)
Reports to: Offshore Team Member
Location: BDC7C or DDC
Work Hours: 24x7 and on-call
Domain: AWS Infrastructure & Linux Administration
Relevant Experience Required: 4.5 to 6.5 years

SUMMARY

This is a client-facing role, responsible for overall support, administration, maintenance, troubleshooting, evolution, and continuous improvement.

ROLE AND RESPONSIBILITIES

This role will specifically focus on AWS IaaS and managing Linux environments. Knowledge of AWS architecture is a must. Manage the cloud infrastructure environment through cross-technology administration (OS, virtual networks, security, monitoring, and backup), and the development and execution of scripts and automations. Manage environment incidents with a focus on service restoration. Act as operations support for all AWS-hosted virtual machines, network, storage, and security.

Mandatory Skills

- Provide L2/L3 support for Linux (RHEL) and Windows servers hosted on AWS infrastructure.
- Installation, configuration, and administration of Linux servers.
- AMI management.
- Experience with the AWS Landing Zone solution.
- Proficient in AWS subscription management and Marketplace.
- Proficient in demand management using Trusted Advisor or AWS Forecast.
- Cluster configuration, management, and troubleshooting.
- Auto Scaling configuration.
- Performance tuning and storage optimization.
- RHEL OS upgrades; Red Hat Satellite server configuration and administration.
- Licensing knowledge and a good understanding of patch and package management.
- Incident and change management; health and performance monitoring: check server health, performance, and capacity alerts, and take preventive and remedial action.
- Design, build, and configure AWS services to meet business process and application requirements.
- Execution and documentation of infrastructure changes, and preparation of work instructions.
- Reporting and participating in governance and audit activities.
- AWS: EC2, EBS, EKS, ECS, ELB, VPC, Route 53, S3, ASG, RDS, CloudWatch, CloudFormation, Amazon WorkSpaces, Elastic Beanstalk, IAM, CloudTrail, CloudFront, etc.
- Knowledge and experience of AWS Landing Zone solutions.
- Lambda, SNS, SQS, Storage Gateway, secure file transfer (SFTP).

Desirable

- Working experience with ServiceNow for incident, change, and problem management.
- Language/scripting knowledge: shell scripting, JSON, Python, Ansible, Terraform; exposure to Lambda Python scripts.
- Tools: monitoring exposure in SignalFx and Splunk.
- Certifications: AWS Solutions Architect, Red Hat Linux, and ITIL.

KEY EXPECTATIONS

- Bridge the relationships between offshore and onshore/client/stakeholder/third-party vendor support teams.
- Maintain the client's confidence by adhering to the SLA and deliverables.
- Build a strong understanding of the client's processes, architecture, and required execution.
- Quick response, timely follow-up, and ownership until closure.

PERSONAL ATTRIBUTES

- High personal drive; results-oriented; makes things happen.
- Excellent communication and interpersonal skills.
- Innovative, creative, and adaptive to new environments.
- Strong analytical and teamwork skills.
- Good attitude toward learning and development.

Qualification: 15 years of full-time education
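The health and capacity monitoring duties this role describes usually come down to evaluating metric datapoints against alarm thresholds. A minimal sketch of that logic, mimicking the shape of a CloudWatch-style alarm (the function and its parameters are illustrative, not an AWS API):

```python
def breaches_threshold(datapoints, threshold, periods):
    """CloudWatch-style alarm logic: True when the last `periods`
    consecutive datapoints all exceed `threshold`.
    Requiring consecutive breaches avoids paging on a single spike."""
    if len(datapoints) < periods:
        return False
    return all(v > threshold for v in datapoints[-periods:])

if __name__ == "__main__":
    cpu = [50, 92, 95, 97]          # e.g., CPU utilization samples
    print(breaches_threshold(cpu, threshold=90, periods=3))  # True
```

In a real setup, CloudWatch evaluates this server-side and the on-call engineer only sees the resulting alert.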

Posted 4 days ago

Apply

2.0 - 5.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Project Role: Operations Engineer
Project Role Description: Support the operations and/or manage delivery for production systems and services based on operational requirements and the service agreement.
Must-have skills: AWS Administration
Good-to-have skills: AIX Unix
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

DESCRIPTION

Job Title: AWS Cloud Operations Senior Analyst (CL 10)
Reports to: Offshore Team Member
Location: BDC7C or DDC
Work Hours: 24x7 and on-call
Domain: AWS Infrastructure & Linux Administration
Relevant Experience Required: 4.5 to 6.5 years

SUMMARY

This is a client-facing role, responsible for overall support, administration, maintenance, troubleshooting, evolution, and continuous improvement.

ROLE AND RESPONSIBILITIES

This role will specifically focus on AWS IaaS and managing Linux environments. Knowledge of AWS architecture is a must. Manage the cloud infrastructure environment through cross-technology administration (OS, virtual networks, security, monitoring, and backup), and the development and execution of scripts and automations. Manage environment incidents with a focus on service restoration. Act as operations support for all AWS-hosted virtual machines, network, storage, and security.

Mandatory Skills

- Provide L2/L3 support for Linux (RHEL) and Windows servers hosted on AWS infrastructure.
- Installation, configuration, and administration of Linux servers.
- AMI management.
- Experience with the AWS Landing Zone solution.
- Proficient in AWS subscription management and Marketplace.
- Proficient in demand management using Trusted Advisor or AWS Forecast.
- Cluster configuration, management, and troubleshooting.
- Auto Scaling configuration.
- Performance tuning and storage optimization.
- RHEL OS upgrades; Red Hat Satellite server configuration and administration.
- Licensing knowledge and a good understanding of patch and package management.
- Incident and change management; health and performance monitoring: check server health, performance, and capacity alerts, and take preventive and remedial action.
- Design, build, and configure AWS services to meet business process and application requirements.
- Execution and documentation of infrastructure changes, and preparation of work instructions.
- Reporting and participating in governance and audit activities.
- AWS: EC2, EBS, EKS, ECS, ELB, VPC, Route 53, S3, ASG, RDS, CloudWatch, CloudFormation, Amazon WorkSpaces, Elastic Beanstalk, IAM, CloudTrail, CloudFront, etc.
- Knowledge and experience of AWS Landing Zone solutions.
- Lambda, SNS, SQS, Storage Gateway, secure file transfer (SFTP).

Desirable

- Working experience with ServiceNow for incident, change, and problem management.
- Language/scripting knowledge: shell scripting, JSON, Python, Ansible, Terraform; exposure to Lambda Python scripts.
- Tools: monitoring exposure in SignalFx and Splunk.
- Certifications: AWS Solutions Architect, Red Hat Linux, and ITIL.

KEY EXPECTATIONS

- Bridge the relationships between offshore and onshore/client/stakeholder/third-party vendor support teams.
- Maintain the client's confidence by adhering to the SLA and deliverables.
- Build a strong understanding of the client's processes, architecture, and required execution.
- Quick response, timely follow-up, and ownership until closure.

PERSONAL ATTRIBUTES

- High personal drive; results-oriented; makes things happen.
- Excellent communication and interpersonal skills.
- Innovative, creative, and adaptive to new environments.
- Strong analytical and teamwork skills.
- Good attitude toward learning and development.

Qualification: 15 years of full-time education

Posted 4 days ago

Apply

5.0 - 9.0 years

10 - 20 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

SAP Basis: SQL, DB2, HANA DB, SAP ECS; BASIS, Migration, and Restore. ECS experience is mandatory, and primary activities must come from BASIS, Migration, and Restore. Full SAP life-cycle experience using HANA and DB2 (desirable) database technologies. Strong hold on SAP NetWeaver 7 or higher. Working knowledge of ADS, NetWeaver, and BTP.

Posted 4 days ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

Yahoo Finance is the world's leading finance destination, providing investors with news, information, and tools to make confident financial decisions. Trusted by over 150 million visitors globally each month, representing over $20 trillion in investable assets, Yahoo Finance delivers high-quality real-time market data across desktop, mobile, and streaming platforms. With breaking news from thousands of sources, original editorial perspectives, objective analyst ratings and research, analytical charts and technical tools, personalized mobile alerts, and more, Yahoo Finance equips investors with knowledge and insights to achieve financial freedom and greater prosperity. Yahoo is a top provider of media and technology brands, reaching over a billion people worldwide. Yahoo's Media Engineering organization utilizes the latest technologies to build brands that members love, including Yahoo, AOL, Engadget, TechCrunch, Autoblog, In The Know, and more. With a focus on building at a massive scale to reach hundreds of millions of users, our teams strive to create world-class user experiences, delivering trusted content and data across all brands. We are committed to building and revitalizing this essential, trusted resource for investors and savers under a new leadership team. As an experienced engineer, you will collaborate closely with Engineering, Product, and Design teams to enhance our product offerings. You will develop applications and tools essential for supporting our business operations and ensuring the quality of our data and services. This role involves architecting, designing, scoping, building, maintaining, and iterating on systems needed to deliver world-class finance products and features.
Responsibilities:

- Be part of an agile scrum team, demonstrating progress through proof of concept, sandboxing, and prototyping
- Architect and design scalable, maintainable, secure, and reusable strategic solutions
- Deploy, monitor, and manage ML models in production environments using MLOps best practices
- Work closely with data scientists to transition models from development to production efficiently
- Optimize ML models and infrastructure for efficiency, scalability, and cost-effectiveness
- Design and implement frameworks and tools to empower developers and non-technical colleagues
- Lead key team initiatives by managing and improving the software development life cycle
- Seek opportunities to improve quality and efficiency in day-to-day workflow processes
- Present and communicate progress across multiple groups, sharing knowledge and best practices
- Perform code reviews for peers and recommend approaches to solving complex problems
- Own, deploy, monitor, and operate large-scale production systems
- Lead and mentor junior engineers in building production-grade systems and applications
- Act as a technical liaison to translate business needs into technical solutions

Requirements (must have):

- MS or PhD in Computer Science or a related major
- 5 to 10 years of industry experience as a Back End Engineer, ML Engineer, or Research Engineer
- Deep functional knowledge and hands-on experience with AWS or GCP cloud services, RESTful web services, containerization (Docker, ECS, Kubernetes), and modern AI tools
- Experience with AI/ML Ops tools and platforms, basic data science concepts, and version control tools
- Capable of implementing resilient web architecture and building web products end to end
- Familiarity with financial datasets and experience with time series analysis
- Ability to work in a hybrid model, commuting 3 days a week to an office in Bangalore

Important notes:

- All applicants must apply for Yahoo openings directly with Yahoo
- Offer letters and documents will be issued through the system for e-signatures
- Yahoo offers flexibility around employee location and hybrid working

For further inquiries about the role, please discuss with the recruiter.
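The MLOps responsibilities listed here (deploying and monitoring models, moving them from development to production) typically involve an automated promotion gate. A hypothetical sketch of such a gate — the metric names and thresholds are invented for illustration, not a specific platform's schema:

```python
def should_promote(candidate, baseline, min_gain=0.01, max_latency_ms=100):
    """Gate a model promotion: require an accuracy gain over the current
    production baseline without breaching the latency budget."""
    gain = candidate["accuracy"] - baseline["accuracy"]
    return gain >= min_gain and candidate["p95_latency_ms"] <= max_latency_ms

if __name__ == "__main__":
    prod = {"accuracy": 0.89}
    new_model = {"accuracy": 0.91, "p95_latency_ms": 80}
    print(should_promote(new_model, prod))  # accuracy up, latency within budget
```

A real pipeline would run this check in CI against held-out evaluation data before shifting traffic to the new model.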

Posted 6 days ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As a DataOps Engineer, you will play a crucial role within our data engineering team, blending elements of software engineering, DevOps, and data analytics. Your primary responsibility will involve the development and maintenance of secure, scalable, and high-quality data pipelines and infrastructure to support our clients' advanced analytics, machine learning, and real-time decision-making needs. Your key responsibilities will include designing, developing, and managing robust ETL/ELT pipelines utilizing Python and modern DataOps methodologies. Additionally, you will implement data quality checks, pipeline monitoring, and error handling mechanisms. Building data solutions on AWS cloud services like S3, ECS, Lambda, and CloudWatch will be an integral part of your role. Furthermore, containerizing applications with Docker and orchestrating them using Kubernetes for scalable deployments will be part of your daily tasks. You will work with infrastructure-as-code tools and CI/CD pipelines to automate deployments efficiently. Moreover, designing and optimizing data models using PostgreSQL, Redis, and PGVector for high-performance storage and retrieval will be essential. Supporting feature stores and vector-based storage for AI/ML applications will also fall under your responsibilities. You will drive Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery. Additionally, reviewing pull requests (PRs), conducting code reviews, and enforcing security and performance standards will be part of your routine. Collaborating closely with product owners, analysts, and architects to refine user stories and technical requirements will also be crucial for the success of the projects. As for the required skills and qualifications, we are looking for someone with at least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles focusing on data products.
Proficiency in Python, Docker, Kubernetes, and AWS (especially S3 and ECS) is essential. Strong knowledge of relational and NoSQL databases like PostgreSQL and Redis, plus experience with PGVector, would be a strong advantage. A deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices is also required. Experience working in Agile/Scrum environments with excellent collaboration and communication skills is a must. A passion for developing clean, well-documented, and scalable code in a collaborative environment is highly valued. Familiarity with DataOps principles, encompassing automation, testing, monitoring, and deployment of data pipelines, is also beneficial.
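The "data quality checks and error handling" this role describes often takes the form of a gate inside the ETL/ELT pipeline that quarantines bad records instead of failing the whole load. A minimal illustrative sketch (column names and rules are hypothetical):

```python
def run_quality_checks(rows, required, non_null):
    """Apply simple DataOps-style quality gates to a batch of rows,
    separating clean rows from rejects with the reason they failed."""
    clean, rejects = [], []
    for row in rows:
        missing = [c for c in required if c not in row]
        nulls = [c for c in non_null if row.get(c) in (None, "")]
        if missing or nulls:
            rejects.append({"row": row, "missing": missing, "nulls": nulls})
        else:
            clean.append(row)
    return clean, rejects

if __name__ == "__main__":
    batch = [
        {"id": 1, "name": "sensor-a"},
        {"id": 2, "name": ""},        # null-like value
        {"name": "sensor-c"},          # missing id
    ]
    clean, rejects = run_quality_checks(batch, required=["id", "name"],
                                        non_null=["name"])
    print(len(clean), len(rejects))
```

In a real pipeline the rejects would land in a quarantine table (e.g., in S3 or PostgreSQL) with the failure reasons attached for later triage.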

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You will serve as a Lead Cloud App Developer at Wipro Limited, a leading technology services and consulting company that specializes in creating innovative solutions for complex digital transformation needs. With a global presence spanning over 65 countries and a workforce of more than 230,000 employees and partners, we are committed to helping our clients, colleagues, and communities thrive in a dynamic world. As a Lead Cloud App Developer, you will need to possess expertise in Terraform, AWS, and DevOps. Additionally, you should hold certifications such as AWS Certified Solutions Architect - Associate and AWS Certified DevOps Engineer - Professional. Your role will involve leveraging your IT experience of more than 6 years to set up and maintain ECS solutions and design AWS solutions with services such as VPC, EC2, WAF, ECS, ALB, IAM, and KMS. Furthermore, you will be expected to have experience with AWS services like SNS, SQS, EventBridge, RDS, Aurora DB, Postgres DB, DynamoDB, Redis, AWS Glue jobs, and AWS Lambda; CI/CD using Azure DevOps; GitHub for source code management; and building cloud-native applications. Your responsibilities will also include working with container technologies like Docker, configuring logging and monitoring solutions like CloudWatch and OpenSearch, and managing system configurations using Terraform and Terragrunt. In addition to technical skills, you should possess strong communication and collaboration abilities, be a team player, have excellent analytical and problem-solving skills, and understand Agile methodologies. Your role will also involve training others in procedural and technical topics, recommending process and architecture improvements, and troubleshooting distributed systems. Join us at Wipro to be a part of our journey to reinvent our business and industry. We are looking for individuals who are inspired by reinvention and are committed to evolving themselves, their careers, and their skills.
Be a part of a purpose-driven business that empowers you to shape your reinvention. Realize your ambitions at Wipro. Applications from individuals with disabilities are warmly welcomed.

Experience Required: 5-8 years

To learn more about Wipro Limited, visit www.wipro.com.

Posted 6 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

About ResMed With a 30-year history of innovation as a global leader in health technology, ResMed powers digital experiences and engagement to enhance the lives of millions of people every day through connected care. We build, deliver, and manage a portfolio of data management platforms and mobile offerings in support of our core businesses. We thrive on simple and elegant architecture and agility. Innovation and imagination aren't just something we aspire to - they are integral to the way we work. We work hard to provide the opportunity for every employee to do amazing things every day as we shape the future together. About the Project Join us in building a next-generation digital health product using SMART on FHIR that empowers providers with intelligent insights at the point of care. Leveraging modern interoperability standards like FHIR, we're enabling health providers to make faster, smarter decisions that improve patient outcomes at scale. Our goal is to help people sleep - and ultimately live - better. Serve as a technical leader: working closely with cross-functional teammates, delivering software within complex problem spaces, cycling through building, deploying, iterating. Apply senior-level knowledge and tackle intrinsically hard problems in enterprise system architecture, microservices, engineering best practices, performance, and scalability. Be a Quality Champion with experience in test-driven development, automated testing, CI/CD pipeline integration, and performance testing. Design and develop test and deployment strategies, execute discoveries and spikes, and prototype solutions. Write critical-path code, applying correct trade-offs and simplifying solutions. Provide reliable estimates of complexity and effort, explore technical trade-offs, and inform risks to deliveries. Support cloud-native application development using AWS services including S3, Lambda, EC2, ELB, SQS, and SNS. 
Ensure developed software meets scalability, fault tolerance, high performance, and high security criteria. Move swiftly through ambiguity with high awareness, building flexible solutions and efficient release pipelines. Take accountability for code in production, including on-call rotations and urgent issue resolution. Perform DevOps duties including database tasks, managing code repositories, and monitoring systems using tools like X-Ray, CloudWatch, and DataDog. Generate and publish test, defect, traceability, and system performance metrics. BS/MS in Computer Science or equivalent experience, with recent coding experience in Java. 7+ years of professional software development experience, including high-volume cloud-native applications and SaaS solutions. Experience in Spring Boot, REST APIs, and FHIR standards. Web development skills including ReactJS, TypeScript, JavaScript, HTML5, CSS3. Experience with backend development using Java. Experience with n-tier architecture and enterprise software applications. Experience with containerization technologies like Docker, Kubernetes, EKS, and ECS. Experience with cloud platforms such as AWS and infrastructure as code using Terraform. Strong understanding of design patterns, algorithms, and object-oriented principles. Experience with relational (SQL) and NoSQL databases. Experience with CI/CD pipelines, infrastructure as code, and release automation. Experience with modern testing tools such as Selenium, RestAssured, Postman, and JMeter. Experience working in regulated environments and with data privacy is a plus. A supportive environment that focuses on people development and best practices. Opportunity to design, influence, and be innovative. Work with global teams and share new ideas. Be supported both inside and outside of the work environment. The opportunity to build something meaningful and see a direct impact on people's lives. Joining us is more than saying yes to making the world a healthier place. 
It's discovering a career that's challenging, supportive, and inspiring, where a culture driven by excellence helps you not only meet your goals but also create new ones. We focus on creating a diverse and inclusive culture, encouraging individual expression in the workplace, and thriving on the innovative ideas this generates. If this sounds like the workplace for you, apply now! We commit to responding to every applicant.
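This role centers on SMART on FHIR and the FHIR interoperability standard. As a small illustration of what working with FHIR resources looks like, a sketch that extracts a display name and birth date from a standard FHIR R4 Patient resource (error handling is intentionally minimal, and the sample data is invented):

```python
import json

def summarize_patient(resource):
    """Pull a display name and birth date from a FHIR R4 Patient resource,
    using the standard fields: name[].given, name[].family, birthDate."""
    if resource.get("resourceType") != "Patient":
        raise ValueError("expected a Patient resource")
    name = (resource.get("name") or [{}])[0]
    given = " ".join(name.get("given", []))
    return {
        "name": f"{given} {name.get('family', '')}".strip(),
        "birthDate": resource.get("birthDate"),
    }

if __name__ == "__main__":
    patient = json.loads("""{
      "resourceType": "Patient",
      "name": [{"use": "official", "family": "Chalmers",
                "given": ["Peter", "James"]}],
      "birthDate": "1974-12-25"
    }""")
    print(summarize_patient(patient))
```

In a SMART on FHIR app, the resource would arrive from the EHR's FHIR API after an OAuth2 launch rather than from a local JSON string.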

Posted 1 week ago

Apply

5.0 - 10.0 years

25 - 32 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Location: Hyderabad, Pune, Bangalore.

Positions:
1. Basis with SQL, DB2 database
2. SAP Basis - Backup, Restore & Migrations
3. SAP Basis - BASIS activities such as system installations, client administration, system refresh/copy, and application updates/upgrades
4. Automation Engineering & Cross Functions - Manage and support HANA and ASE databases, including performance tuning, backups, and recovery
5. SAP Basis & Database - Full SAP life-cycle experience using HANA and DB2 (desirable) database technologies; experience in DR activities
6. NetWeaver Build - Full SAP life-cycle experience using HANA and DB2 (desirable) database technologies; strong hold on SAP NetWeaver 7 or higher; working knowledge of ADS, NetWeaver, and BTP
7. NetWeaver Build - Knowledge of Business Continuity, High Availability & Disaster Recovery topics; experience with S/4 Private Cloud Edition (PCE) side-by-side scenarios in which S/4 interacts with BTP applications

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The client is a global leader in delivering cutting-edge inflight entertainment and connectivity (IFEC) solutions. As a developer in this role, you will be responsible for building user interfaces using Flutter, React.js, or similar frontend frameworks. You will also develop backend services and APIs using Python, ensuring smooth data flow between the frontend and backend by working with REST APIs. Additionally, you will use the Linux terminal and bash scripting for basic automation tasks, manage code using Git, and set up CI/CD pipelines using tools like GitLab CI/CD. Deployment and management of services on AWS (CloudFormation, Lambda, API Gateway, ECS, VPC, etc.) will be part of your responsibilities. It is essential to write clean, testable, and well-documented code while collaborating with other developers, designers, and product teams.

Requirements:

- Minimum 3 years of frontend software development experience
- Proficiency in GUI development using Flutter or other frontend stacks (e.g., React.js)
- 3+ years of Python development experience
- Experience with Python for backend and API servers
- Proficiency in the Linux terminal and bash scripting
- Familiarity with GitLab CI/CD or other CI/CD tools
- AWS experience including CloudFormation, API Gateway, ECS, Lambda, and VPC
- Bonus: data science skills, with experience in the pandas library
- Bonus: experience with the development of recommendation systems and LLM-based applications

If this opportunity aligns with your expertise, please share your updated CV and relevant details with pearlin.hannah@antal.com.
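The Python backend/API work described above usually starts with validating incoming JSON and returning an HTTP-style result. A framework-agnostic sketch of one such handler — the endpoint, field names, and rules are hypothetical, not the client's actual API:

```python
def handle_create_item(payload):
    """Shape of a minimal backend handler: validate a JSON payload and
    return an HTTP-like (status, body) pair that a web framework would
    serialize. Field names and rules are invented for illustration."""
    errors = {}
    if not isinstance(payload.get("name"), str) or not payload.get("name"):
        errors["name"] = "required non-empty string"
    quantity = payload.get("quantity")
    if not isinstance(quantity, int) or quantity < 0:
        errors["quantity"] = "required non-negative integer"
    if errors:
        return 400, {"errors": errors}
    return 201, {"item": {"name": payload["name"], "quantity": quantity}}

if __name__ == "__main__":
    print(handle_create_item({"name": "cable", "quantity": 2}))
    print(handle_create_item({"quantity": -1}))
```

Keeping validation in a plain function like this makes it easy to unit-test independently of whatever web framework (Flask, FastAPI, a Lambda handler) wraps it.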

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Bengaluru

Work from Office

We are on the lookout for Java experts. The skillset is given below for reference. Interested candidates, please share your CV along with the details below to vinoth.jayaraman@photon.com
Mandatory Skills:
- Java: version 17/21, Spring Boot, and Reactive Streams concepts (e.g., Flux/Mono)
- Cloud exposure: AWS (especially ECS and EKS)
- Database: PostgreSQL/CockroachDB
- Test-Driven Development: system behavior test cases (Cucumber-based) and unit test cases using JUnit and Mockito
Available Positions: 10
Experience: 5 to 18 years
Job Location: Bangalore (Kadubeesanahalli)
Work Mode: Work from Office
Information to Share:
- Total experience in IT (in years):
- Relevant experience in Java 17/21 with Spring Boot (in years):
- Relevant experience in Reactive Streams (in years):
- Relevant experience in Postgres DB/Cockroach DB (in years):
- Relevant experience in System Behavior/Cucumber/JUnit/Mockito (in years):
- Current CTC:
- Expected CTC:
- Notice period:
- Last working date (if applicable):
- Current location:
- Willing to work from Bangalore:

Posted 1 week ago

Apply

0.0 - 2.0 years

6 - 8 Lacs

Hyderabad

Work from Office

AutoRABIT Profile
AutoRABIT is the leader in DevSecOps for SaaS platforms such as Salesforce. Its unique metadata-aware capability makes Release Management, Version Control, and Backup & Recovery complete, reliable, and effective. AutoRABIT's highly scalable framework covers the entire DevSecOps cycle, which makes it the favourite platform for companies, especially large ones that require enterprise strength and robustness in their deployment environment. AutoRABIT increases the productivity and confidence of developers, which makes it a critical tool for development teams, especially large ones with complex applications. AutoRABIT has institutional funding and is well positioned for growth. Headquartered in California, USA, and with customers worldwide, AutoRABIT is a place for bringing your creativity to the most demanding SaaS marketplace.
Job Role
We are seeking an enthusiastic Junior SRE who is passionate about learning AWS, cloud automation, and infrastructure best practices. This is a growth opportunity for someone who has basic hands-on experience with AWS workshops and scripting.
Roles & Responsibilities
- Learn and assist in provisioning AWS resources using Terraform.
- Write simple automation scripts and Lambda functions (Python 3 + Boto3).
- Explore AWS tools such as EKS, ECS, CodePipeline, etc. through guided tasks.
- Support monitoring setup and incident alerting configurations.
- Participate in team reviews and explore areas for infrastructure improvement.
- Adhere to set internal controls.
Desired Skills and Knowledge
- 0-2 years of experience or strong personal/project-based AWS exposure.
- Good understanding of AWS services and use cases (especially EKS, ECS, EC2, S3, RDS).
- Basic scripting knowledge in Python with Boto3, or willingness to learn quickly.
- Exposure to AWS Hands-on Labs, Cloud Academy, or self-practice projects.
- Strong interest in cloud technologies, automation, and DevOps culture.
- Eagerness to grow into a senior engineering role with mentorship and training.
- Willingness to work in rotational shifts with rotational week-offs.
Education: Bachelor's degree in Computer Science or a related field. AWS certification is preferred.
Location: Hyderabad, Hybrid - 3 Days from Office
Experience: 0-2 Years
Compensation: 6 - 8 LPA
Website: www.autorabit.com
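For a sense of the "simple automation scripts and Lambda functions (Python 3 + Boto3)" this role mentions, here is a minimal, hypothetical sketch. The payload mirrors the shape of the EC2 `describe_instances` response; the real boto3 call is only indicated in a comment, and all IDs are invented:

```python
def stopped_instance_ids(describe_response):
    """Collect IDs of stopped instances from an EC2 describe_instances-shaped response."""
    ids = []
    for reservation in describe_response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            if inst.get("State", {}).get("Name") == "stopped":
                ids.append(inst["InstanceId"])
    return ids

def lambda_handler(event, context):
    # In a real Lambda you would fetch live data with boto3, e.g.:
    #   import boto3
    #   response = boto3.client("ec2").describe_instances()
    # Here the response shape is illustrated via the event payload instead.
    return {"stopped": stopped_instance_ids(event)}

sample = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0abc", "State": {"Name": "stopped"}},
            {"InstanceId": "i-0def", "State": {"Name": "running"}},
        ]}
    ]
}
print(lambda_handler(sample, None))  # → {'stopped': ['i-0abc']}
```

Keeping the parsing logic in a pure helper like `stopped_instance_ids` makes this kind of automation easy to unit-test without touching AWS at all.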

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

About your role
An Expert Engineer is a seasoned technologist with strong programming, engineering, and problem-solving skills, able to deliver value to the business faster and with superlative quality. Their code and designs meet business, technical, non-functional, and operational requirements most of the time without defects or incidents. So, if a relentless focus on and drive towards technical and engineering excellence, along with adding value to the business, excites you, this is absolutely the role for you. If technical discussions and whiteboarding with peers excite you, and pair programming and code reviews add fuel to your tank, then we are looking for you. You will understand system requirements, then analyse, design, develop, and test application systems following the defined standards. The candidate is expected to display professional ethics in their approach to work and exhibit a high level of ownership within a demanding working environment.
About you
Essential Skills
- Excellent software design, programming, engineering, and problem-solving skills.
- Strong experience with data ingestion, transformation, and distribution using AWS or Snowflake.
- Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools such as NiFi, Matillion, and DBT.
- Hands-on working knowledge of EC2, Lambda, ECS/EKS, DynamoDB, and VPCs.
- Familiarity with building data pipelines that leverage the full power and best practices of Snowflake, and with integrating common technologies that work with Snowflake (code CI/CD, monitoring, orchestration, data quality).
- Experience designing, implementing, and overseeing the integration of data systems and ETL processes through SnapLogic.
- Designing data ingestion and orchestration pipelines using AWS and Control-M.
- Establishing strategies for data extraction, ingestion, transformation, automation, and consumption.
- Experience with data lake concepts covering structured, semi-structured, and unstructured data.
- Experience creating CI/CD processes for Snowflake.
- Experience with strategies for data testing, data quality, code quality, and code coverage.
- Ability, willingness, and openness to experiment with, evaluate, and adopt new technologies.
- Passion for technology, problem solving, and teamwork.
- Go-getter: able to navigate across roles, functions, and business units to collaborate and drive agreements and changes from drawing board to live systems.
- Lifelong learner who can bring contemporary practices, technologies, and ways of working to the organization.
- Effective collaborator, adept at using all effective modes of communication and collaboration tools.
Experience delivering on data-related non-functional requirements such as:
- Hands-on experience dealing with large volumes of historical data across markets/geographies.
- Manipulating, processing, and extracting value from large, disconnected datasets.
- Building watertight data quality gates on investment management data.
- Generic handling of standard business scenarios such as missing data, holidays, and out-of-tolerance errors.
Experience and Qualification:
- B.E./B.Tech. or M.C.A. in Computer Science from a reputed university.
- 7 to 10 years of relevant experience in total.
Personal Characteristics
- Good interpersonal and communication skills; a strong team player.
- Ability to work at both a strategic and a tactical level.
- Ability to convey strong messages in a polite but firm manner.
- Self-motivation is essential; should demonstrate commitment to high-quality design and development.
- Ability to develop and maintain working relationships with several stakeholders.
- Flexibility and an open attitude to change.
- Problem-solving skills with the ability to think laterally, and with a medium- and long-term perspective.
- Ability to learn and quickly get familiar with a complex business and technology environment.
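The "data quality gates" and missing-data/out-of-tolerance handling described above can be sketched in plain Python. The tolerance band, report shape, and sample series are illustrative assumptions, not the team's actual framework:

```python
def quality_gate(values, lower, upper):
    """Classify a series of observations against a tolerance band.

    None entries model missing data (e.g. market holidays) and are
    counted separately rather than failing the whole batch.
    """
    report = {"ok": 0, "missing": 0, "out_of_tolerance": []}
    for i, v in enumerate(values):
        if v is None:
            report["missing"] += 1
        elif lower <= v <= upper:
            report["ok"] += 1
        else:
            report["out_of_tolerance"].append(i)  # record offending index
    report["passed"] = not report["out_of_tolerance"]
    return report

prices = [100.2, None, 99.8, 250.0, 100.1]  # None = holiday / missing feed
print(quality_gate(prices, lower=90.0, upper=110.0))
```

In a real pipeline a failed gate would typically block the load and raise an alert, while the `missing` count feeds a separate completeness report.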

Posted 1 week ago

Apply

4.0 - 7.0 years

15 - 20 Lacs

Mumbai

Work from Office

This position is for self-motivated and highly energetic individuals who can think of multiple solutions to a given problem and help in decision-making while working in a super-agile environment. Here's what you will get to explore:
- Automating cloud solutions using tools standard in the Cloud/DevOps industry, following best practices and an "Infrastructure as Code" mindset.
- Researching and maintaining current knowledge of cloud provider capabilities.
- Supporting operational and stakeholder teams to ensure business continuity and customer satisfaction.
- Automating monitoring tools to ensure system health and reliability, supporting high uptime requirements.
- Ensuring adherence to standards, policies, and procedures.
We can see the next Entrepreneur at Seclore in you if you:
- Have a technical degree (Engineering, MCA) from a reputed institute.
- Possess 4+ years of experience working with AWS.
- Have 3+ years of experience with Jenkins, Docker, Git, and Ansible.
- Bring 5-6 years of total relevant experience.
- Embrace an automation-first mindset.
- Communicate effectively (verbal and written) and manage priorities well.
- Have experience managing multiple production workloads on AWS.
- Understand the software lifecycle and appreciate DevOps/Automation both theoretically and practically.
- Have hands-on experience or knowledge in the following:
  - Scripting: Python and Bash
  - Configuration Management: Ansible/Puppet
  - Containers: Docker (preferably ECS)
  - Databases: Oracle RDS, including performance tuning and maintenance
  - Infrastructure as Code: Terraform/CloudFormation
  - Building secure and scalable infrastructure
Why do we call Seclorites Entrepreneurs, not Employees? We value and support those who take initiative and take calculated risks. We have the attitude of a problem solver and an aptitude that is tech-agnostic. You get to work with the smartest minds in the business. We are thriving, not just living. At Seclore, it is not just about work but about creating outstanding employee experiences.
Our supportive and open culture enables our team to thrive.

Posted 1 week ago

Apply

8.0 - 10.0 years

15 - 18 Lacs

Bengaluru

Remote

We are seeking a skilled DevOps Engineer with expertise in AWS, cloud infrastructure, and container technologies. You will enable software development teams to build, test, and deploy features efficiently and with flexibility in mind. Your drive to innovate and continuously improve will not only increase the capabilities of the delivery team but will also play a critical role in identifying technical solutions in our business. Strong stakeholder management and the ability to provide technical guidance to internal teams are essential for success in this role. You will be working for our Australia-based client. Established in 2002, they strive to modernize the movement of goods and provide supply chain participants the best on-the-go IT solutions and services. They support organizations across the globe, connecting people, goods & technology, and their mission is to deliver seamless, secure, real-time, data-fuelled connections that power the logistics of delivery. REQUIRED COMPETENCIES:
- Experience managing and operating Amazon Web Services (AWS) components, including but not limited to IAM, ELB, VPC, API Gateway, EC2, S3, RDS, EKS, ECS, EFS, and ElastiCache, or Azure equivalents.
- Experience using Linux and scripting languages (Bash, PowerShell, or Python).
- Experience delivering infrastructure solutions as code (Terraform or CloudFormation).
- Proficiency in managing applications and environment configurations through Ansible or equivalent configuration management tools.
- Demonstrable experience implementing containerization strategies using Kubernetes, AWS ECS, or similar.
- Proficiency in creating, maintaining, and optimizing Continuous Integration/Continuous Delivery (CI/CD) pipelines leveraging tools such as Git, Jenkins, AWS CodePipeline & CodeBuild.
- Strong knowledge of IT security practices and networking.
- Experience with or exposure to the following technologies: Jira, Confluence, Git, SonarQube, Azure AD, Datadog/New Relic.
- Experience working with Developers, DevOps, and Engineering teams in a dynamic environment to promote and implement the DevOps program throughout the organisation.
DESIRED COMPETENCIES:
- Experience with software development, programming (C#, Java, .NET, NodeJS, etc.), and microservices architecture.
QUALIFICATIONS:
- Candidate must possess at least a Bachelor's/College degree in Computer Science, Information Technology, Engineering (Computer/Telecommunication), or equivalent experience.
- More than five years' experience in DevOps Engineering, underpinned by a solid comprehension of the fundamentals of computer science and software engineering principles.

Posted 1 week ago

Apply

2.0 - 7.0 years

5 - 15 Lacs

Chennai

Work from Office

About the Role
We are seeking a proactive and experienced DevOps Engineer to manage and scale our new AWS-based cloud architecture. This role is central to building a secure, fault-tolerant, and highly available environment that supports our Sun, Drive, and Comm platforms. You'll play a critical role in automation, deployment pipelines, monitoring, and cloud cost optimization.
Key Responsibilities
- Design, implement, and manage infrastructure using AWS services across multiple Availability Zones.
- Maintain and scale EC2 Auto Scaling Groups, ALBs, and secure networking layers (VPC, public/private subnets).
- Manage API Gateways, bastion hosts, and secure SSH/VPN access for developers and administrators.
- Set up and optimize Aurora SQL clusters with multi-AZ active-active failover and backup strategies.
- Implement and maintain observability using CloudWatch for centralized logging, metrics, and alarms.
- Enforce infrastructure-as-code practices using Terraform/CloudFormation.
- Configure and maintain CI/CD pipelines (e.g., GitHub Actions, Jenkins, or CodePipeline).
- Ensure backup lifecycle management using S3 tiering and retention policies.
- Collaborate with engineering teams to enable DevSecOps best practices and drive automation.
- Continuously optimize infrastructure for performance, resilience, and cost (e.g., Savings Plans, S3 lifecycle policies).
Must-Have Skills
- Strong hands-on experience with AWS core services: EC2 (Linux and Windows), ALB, VPC, S3, Aurora (MySQL/PostgreSQL), CloudWatch, API Gateway, IAM, VPN.
- Deep understanding of multi-AZ, high-availability, and auto-healing architectures.
- Experience with CI/CD tools and scripting (Bash, Python, or Shell).
- Working knowledge of networking and cloud security best practices (Security Groups, NACLs, IAM roles).
- Experience with bastion architecture, Client VPNs, Route 53, and VPC peering.
- Familiarity with backup/restore strategies and monitoring/logging pipelines.
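The "S3 tiering and retention policies" responsibility can be illustrated with the rule shape that S3's lifecycle configuration API expects. This is a sketch only: the bucket name, prefix, and day counts are hypothetical, and the boto3 call is shown in a comment rather than executed:

```python
def backup_lifecycle_rule(prefix, ia_days, glacier_days, expire_days):
    """Build an S3 lifecycle rule that tiers backups down and eventually expires them."""
    return {
        "ID": f"tier-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": ia_days, "StorageClass": "STANDARD_IA"},
            {"Days": glacier_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_days},
    }

rule = backup_lifecycle_rule("backups/", ia_days=30, glacier_days=90, expire_days=365)
# Applied with boto3 (not executed here):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration={"Rules": [rule]})
print(rule)
```

Encoding the rule in Terraform or CloudFormation instead of ad-hoc boto3 calls keeps the retention policy version-controlled, which matches the infrastructure-as-code practices listed above.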
Good-to-Have Exposure to containerization (Docker/ECS/EKS) or future readiness for CloudFront/ElastiCache integration. Knowledge of cost management strategies on AWS (e.g., billing reports, Trusted Advisor). Why Join Us? Work on a mission-critical mobility platform with a growing user base. Be a key part of transforming our legacy systems into a modern, scalable infrastructure. Collaborative and fast-paced environment with real ownership. Opportunity to drive automation and shape future DevSecOps practices.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

You will be working as a software development partner for a very large enterprise financial products and services firm listed on the NYSE. Your main responsibility will involve re-architecting and migrating 30-year-old legacy systems to cloud-based microservices on AWS. The application you will be working on is crucial for their trading, journal, and bookkeeping platforms. Your contributions at BeeHyv will include developing an event-driven ETL pipeline using AWS services to transfer end-of-day securities data from the mainframe to a relational database. You will also be building highly resilient and highly available containerized microservices to enhance the application's performance. Automation will be a key aspect of your role, involving the automation of build and deployment processes using Jenkins and Terraform for various environments such as Sandbox, Dev, QA, UAT, and Prod. You will be responsible for writing unit tests to ensure test coverage of over 80% for the entire application. Additionally, you will create a regression test suite to validate the accuracy and correctness of the software after enhancements and modifications. The technologies you will be working with include Java, Kafka, Groovy, Jenkins, AWS (specifically EC2, ECS, and Glue), and Docker. Your role will be pivotal in modernizing and optimizing the firm's systems for better efficiency and performance.
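A common first step in moving mainframe end-of-day data into a relational store is parsing fixed-width records. The sketch below (in Python for brevity, though the posting's stack is Java) uses an invented layout and field names, not the firm's actual record format:

```python
def parse_fixed_width(record, layout):
    """Split a fixed-width mainframe record into named, typed fields."""
    row, pos = {}, 0
    for name, width, convert in layout:
        raw = record[pos:pos + width].strip()  # fields are space-padded
        row[name] = convert(raw)
        pos += width
    return row

# Hypothetical end-of-day securities layout: (field, width, converter)
EOD_LAYOUT = [
    ("symbol", 8, str),
    ("quantity", 10, int),
    ("close_price", 12, float),
]

record = "AAPL    " + "      1500" + "      211.25"  # widths 8 + 10 + 12
print(parse_fixed_width(record, EOD_LAYOUT))
```

In an event-driven pipeline, a step like this would typically run inside a Glue job or Lambda triggered by the mainframe file landing in S3, with the parsed rows written on to the relational target.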

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

You will be responsible for designing and contributing to infrastructure improvements, architecture, development, and monitoring of a global distributed platform. Your role will involve developing infrastructure as code and maintaining common tools and infrastructure, such as CI/CD pipelines, monitoring, cluster management, and config management. Additionally, you will be writing code and contributing to the software architecture of a highly concurrent, high-throughput IaC abstraction layer on AWS. You will also contribute to improving the security posture of the AWS infrastructure using the AWS Well-Architected Framework. Your focus areas will include encryption of data (S3, EBS, etc.), deploying a new VDI solution, IAM refactoring, Session Manager compute access, role-based access to databases, infrastructure tagging, and automation. Required Skill Set:
- Minimum 6 to 8 years of DevOps/SRE work experience.
- Expertise in AWS core services, advanced Linux, Puppet, and Packer.
- Experience with Docker, ECR, ECS, and EKS.
- Expertise in AWS networking and security.
- Expertise in building AMIs using Packer.
- Expertise in authoring IaC using Terraform.
- Expertise in implementing observability solutions.
- Expertise in authoring scripts or code using Python or Ruby.
- Experience in the administration of Ubuntu OS.
- Bonus: proven experience in application deployment automation (building CI/CD pipelines) and security knowledge.
You should have Terraform experience in multiple clouds, not limited to AWS. Your role will be crucial in contributing to the efficiency and security of the infrastructure, ensuring seamless operations across the platform.
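The "infrastructure tagging" focus area typically boils down to a compliance check like the following sketch; the required tag set and resource IDs here are assumptions for illustration, not this platform's actual policy:

```python
REQUIRED_TAGS = {"owner", "environment", "cost-center"}  # example policy

def missing_tags(resource_tags, required=REQUIRED_TAGS):
    """Return required tag keys that are absent or empty on a resource."""
    present = {k for k, v in resource_tags.items() if str(v).strip()}
    return sorted(required - present)

def audit(resources):
    """Map resource id -> missing tags, keeping only non-compliant resources."""
    report = {}
    for rid, tags in resources.items():
        gaps = missing_tags(tags)
        if gaps:
            report[rid] = gaps
    return report

fleet = {
    "i-0abc": {"owner": "data-eng", "environment": "prod", "cost-center": "42"},
    "i-0def": {"owner": "", "environment": "dev"},  # empty owner counts as missing
}
print(audit(fleet))  # → {'i-0def': ['cost-center', 'owner']}
```

The same policy is usually also enforced preventively, e.g. via Terraform variable validation or AWS Config rules, with a script like this serving as the periodic audit.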

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

As a Software Engineer - Backend at ZoTok TradeTech in Hyderabad, you will have the opportunity to unleash your coding skills and creativity to develop innovative solutions that shape the future of fintech. Collaborating with a team of visionaries, you will be responsible for designing and creating robust backend solutions that drive our cutting-edge products. Your key responsibilities will include building scalable APIs and services to facilitate seamless communication between applications and databases. You will have the freedom to innovate and experiment, bringing fresh ideas and creative solutions to the table to propel our technology forward. In addition, you will dive deep into technical challenges, troubleshoot issues, optimize performance, and ensure system reliability. At ZoTok, we value collaboration and knowledge sharing. You will work closely with cross-functional teams to bring ideas to life and deliver exceptional results. By staying abreast of emerging technologies and best practices, you will continuously expand your skill set and drive innovation within the team. To excel in this role, you should have a strong proficiency in Node.js, Loopback, NestJs, RDBMS (MySQL), AWS, Lambda, ECS, Microservices, and related technologies. Your ability to write efficient and optimized code for high performance and scalability will be crucial. Experience in designing RESTful APIs, implementing Microservices architecture, and understanding software development principles such as SOLID, DRY, and KISS is essential. Familiarity with cloud computing, deploying applications on AWS, containerization technologies like Docker, and database design and optimization principles are also required. Effective collaboration with developers, designers, and stakeholders, along with strong analytical and problem-solving skills, will be key to your success in this role. Join us at ZoTok, where diversity is celebrated, innovation is fostered, and your best ideas are empowered. 
If you are eager to embark on an exciting journey filled with opportunities to make a difference, apply now and let's create something extraordinary together.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Join us as a Cloud Data Engineer at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. You may be assessed on key critical skills relevant for success in the role, such as risk and control, change and transformation, business acumen, strategic thinking, and digital technology, as well as job-specific skill sets. To be successful as a Cloud Data Engineer, you should have:
- Experience with AWS Cloud technology for data processing and a good understanding of AWS architecture.
- Experience with compute services such as EC2, Lambda, Auto Scaling, and VPC.
- Experience with storage and container services such as ECS, S3, DynamoDB, and RDS.
- Experience with Management & Governance services: KMS, IAM, CloudFormation, CloudWatch, CloudTrail.
- Experience with analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift.
- Experience with solution delivery for data processing components in larger end-to-end projects.
Desirable skill sets/good to have:
- AWS Certified professional.
- Experience in data processing on Databricks and Unity Catalog.
- Ability to drive projects technically with right-first-time deliveries within schedule and budget.
- Ability to collaborate across teams to deliver complex systems and components and to manage stakeholders' expectations well.
- Understanding of different project methodologies, project lifecycles, major phases, dependencies and milestones within a project, and the required documentation needs.
- Experience with planning, estimating, organizing, and working on multiple projects.
This role will be based out of Pune.
Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, to ensure that all data is accurate, accessible, and secure.
Accountabilities: - Build and maintenance of data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data. - Design and implementation of data warehouses and data lakes that manage appropriate data volumes and velocity and adhere to required security measures. - Development of processing and analysis algorithms fit for the intended data complexity and volumes. - Collaboration with data scientists to build and deploy machine learning models. Analyst Expectations: - Will have an impact on the work of related teams within the area. - Partner with other functions and business areas. - Takes responsibility for end results of a team's operational processing and activities. - Escalate breaches of policies/procedure appropriately. - Take responsibility for embedding new policies/procedures adopted due to risk mitigation. - Advise and influence decision making within own area of expertise. - Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. - Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct. - Maintain and continually build an understanding of how own sub-function integrates with function, alongside knowledge of the organization's products, services, and processes within the function. - Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization sub-function. - Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. - Guide and persuade team members and communicate complex/sensitive information. - Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization. 
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

You should have strong experience in Java, Spring, Spring Boot, SQL, NoSQL (specifically Cassandra), API integration, REST services, and testing (TDD). AWS Developer or AWS Architect certifications are preferred. Experience with development in AWS, including ECS or EKS, Aurora Postgres DB, Terraform, and build & deployment, is required. Familiarity with Internal Chase Contractors: Atlas AWS is a plus. Your role will involve working on advanced technical projects and collaborating with a team of professionals. You will be responsible for developing, integrating, and testing various components of the software. Your expertise in Java, Spring frameworks, databases, and AWS services will be crucial for the successful completion of assigned tasks. You should possess a deep understanding of AWS services, cloud technologies, and modern development practices. Prior experience working on cloud-based projects and utilizing tools like Terraform for infrastructure management will be beneficial. Strong problem-solving skills and the ability to work in a fast-paced environment are essential for this role. Mphasis is a global technology company that leverages next-generation technologies to drive business transformation for enterprises worldwide. The company's Front2Back Transformation approach focuses on delivering hyper-personalized digital experiences using cloud and cognitive technologies. The Mphasis Service Transformation methodology helps businesses adapt to evolving digital landscapes by leveraging core reference architectures and specialized tools. If you are passionate about technology, innovation, and delivering exceptional solutions to clients, this role at Mphasis offers an exciting opportunity to work on cutting-edge projects and make a meaningful impact on global businesses.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

pune, maharashtra

On-site

As a DataOps Engineer, you will play a crucial role within our data engineering team, working at the intersection of software engineering, DevOps, and data analytics. Your primary responsibility will be creating and managing secure, scalable, production-ready data pipelines and infrastructure that support advanced analytics, machine learning, and real-time decision-making for our clientele. Your key duties will encompass designing, developing, and overseeing the implementation of robust, scalable, and efficient ETL/ELT pipelines leveraging Python and contemporary DataOps methodologies. You will also be tasked with incorporating data quality checks, pipeline monitoring, and error-handling mechanisms, as well as constructing data solutions using cloud-native AWS services such as S3, ECS, Lambda, and CloudWatch. Furthermore, your role will entail containerizing applications using Docker and orchestrating them via Kubernetes to facilitate scalable deployments. You will work with infrastructure-as-code tools and CI/CD pipelines to automate deployments effectively. Additionally, you will design and optimize data models using PostgreSQL, Redis, and PGVector, ensuring high-performance storage and retrieval while supporting feature stores and vector-based storage for AI/ML applications. Beyond your technical responsibilities, you will actively drive Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery. You will also review pull requests (PRs), conduct code reviews, and uphold security and performance standards. Your collaboration with product owners, analysts, and architects will be essential in refining user stories and technical requirements.
To excel in this role, you are required to have at least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles with a focus on data products. Proficiency in Python, Docker, Kubernetes, and AWS (specifically S3 and ECS) is essential. Strong knowledge of relational and NoSQL databases like PostgreSQL and Redis, plus experience with PGVector, will be advantageous. A deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices is crucial, as is experience working in Agile/Scrum environments with excellent collaboration and communication skills. Moreover, a passion for developing clean, well-documented, and scalable code in a collaborative setting, along with familiarity with DataOps principles encompassing automation, testing, monitoring, and deployment of data pipelines, will help you excel in this role.
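The "error-handling mechanisms" for pipelines mentioned above often amount to retrying transient failures with exponential backoff. A minimal sketch follows; the step name, delays, and failure count are illustrative, not this team's actual tooling:

```python
import time

def with_retries(step, attempts=3, base_delay=0.01):
    """Run a pipeline step, retrying transient failures with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise  # out of retries: surface the error to the orchestrator
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}

def flaky_extract():
    """Simulated extract step that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient source outage")
    return ["row-1", "row-2"]

print(with_retries(flaky_extract))  # → ['row-1', 'row-2'] after two retries
```

Production pipelines usually narrow the `except` to specific transient error types and emit a metric per retry so monitoring (e.g. CloudWatch) can alert on persistent flakiness.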

Posted 1 week ago

Apply

1.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

As a Lead Developer, you will report to the Head of Technology, based in London. Your core responsibilities will include leading the development of new features and platform improvements; managing, mentoring, and motivating team members; liaising with UK-based stakeholders to ensure team alignment with business and technical objectives; planning, coordinating, and delivering technical projects to the agreed schedule; championing and enforcing technical standards; ensuring the Software Development Life Cycle (SDLC) is followed within the team; and assisting with hiring, onboarding, and developing the team. In this role, you will work with an exciting and modern tech stack built for scale, reliability, and productivity. To succeed, you should have solid experience with tools such as Python (SQLAlchemy, Flask, NumPy, pandas), MySQL, AWS (ECS, S3, Lambda, RDS), RabbitMQ, Docker, Linux, GitLab, and Generative AI and other AI tools. The required qualifications and experience for this role include a university degree in a STEM subject from a reputable institution; at least 8 years of professional software development experience, with at least 2 years in a lead/management role; proven experience liaising with remote stakeholders; familiarity with the tech stack or equivalent technologies; and a basic understanding of financial markets and derivative products. You should possess excellent teamwork skills, professional fluency in English (both written and spoken), excellent interpersonal and communication skills to collaborate effectively across global teams and time zones, a strong understanding of distributed software systems, an analytical and inquisitive mindset, and a desire to take on responsibility and make a difference.
In terms of benefits, the company offers a competitive compensation package, including a competitive salary based on experience and role fit, an annual/performance bonus, health insurance, life insurance, meal benefits, learning & development opportunities relevant to your role and career growth, an enhanced leave policy, and a transport budget for roles requiring a commute outside of business hours. This is a full-time position that requires in-person work.

Posted 1 week ago

Apply