10.0 years
0 Lacs
Hyderābād
On-site
Project description
Luxoft, a DXC Technology Company, is an established company focusing on consulting and implementation of complex projects in the financial industry. At the interface between technology and business, we convince with our know-how, well-founded methodology, and passion for success. As a reliable partner to our renowned customers, we support them in planning, designing, and implementing the desired innovations. Together with the customer, we deliver top performance! For one of our clients in the insurance segment, we are searching for a .NET Full Stack Developer.

Responsibilities
Delivering assigned tasks within the delivery cycle of an application development project. Tasks may include installing new systems applications, updating applications, performing configuration and testing activities, and applications programming for assigned modules within a larger program. You will work under supervision from the Technical Lead/Project Manager or a Senior Developer to accomplish assigned tasks, while contributing to the design of specific deliverables and assisting in the development of technical solutions. Job duties will include design, development, and testing using .NET technologies. Help maintain a rigorous software build and testing framework for continuously building and testing the developed software, and keep track of failed builds or build issues. Prepare software technical documentation based on functional documentation and specifications, taking into account any specified functional and technical requirements. You will be part of a fast-growing and exciting division whose culture is entrepreneurial, professional, and rooted in teamwork and innovation. You will participate as part of a team and maintain good relationships with team members and customers. You are expected to work within an international environment, using a broad set of technologies and frameworks.

Skills
Must have: At least 10 years of total proven hands-on experience working with .NET technologies, of which at least 5 years are in full stack development with C#, .NET, Angular (in-support versions), SQL, Java, and RESTful APIs. Strong proficiency in the .NET framework and the C# programming language. Familiarity with microservices architecture and its implementation. Solid understanding of web development best practices, design patterns, and architecture. Knowledge of IBM DB2. Experience with internal private cloud implementations on OpenStack and OpenShift platforms via IaC (Terraform). Enterprise content management architectures. Basic knowledge of Linux.
Nice to have: Insurance industry experience. Knowledge of Prism Doc for Java applications.

Other
Languages: English: C1 Advanced
Seniority: Lead
Location: Hyderabad, IN, India
Req. VR-115132 | C#/VB.NET | BCM Industry | 17/06/2025
Posted 2 days ago
0 years
0 Lacs
Hyderābād
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
Core Technology Infrastructure (CTI), part of the Global Technology & Operations organization, consists of more than 6,600 employees worldwide. With a presence in more than 35 countries, CTI designs, builds and operates end-to-end technology infrastructure solutions and manages critical systems and platforms across the bank. CTI delivers industry-leading infrastructure products and services to the company’s employees, customers and clients around the world.

Job Description*
Terraform Software Developer – The candidate will be responsible for developing automation tools focused on Terraform Enterprise. Experience should include Terraform development and administration (back end of the platform), system administration (primarily Linux), and integration with other automation tools such as Horizon, Ansible Platform and GitHub. Understanding of SDLC processes and tools. Experience with cloud infrastructure as code, APIs, YAML, HCL, and Python. The role also requires operational experience with monitoring of systems, incident, and problem management. (An illustrative automation sketch follows this posting.)

Responsibilities*
Experience using Terraform. Review Bitbucket feature files and branching strategy; maintain Bitbucket branches. Evaluate services of Azure and AWS and use Terraform to develop modules. Improve and optimize deployment challenges and help deliver reliable solutions. Interact with technical leads and architects to discover solutions that help solve challenges faced by Product Engineering teams. Be part of an enriching team and solve real production engineering challenges. Improve knowledge in the areas of DevOps and Cloud Engineering by using enterprise tools and contributing to project success. Programming or scripting skills in Python/PowerShell. Any related cloud certification is nice to have.
Ensure that all system deliverables meet quality objectives in functionality, performance, stability, security, accessibility, and data quality. Provide work breakdown and estimates for tasks on agreed scope and development milestones to meet overall project timelines. Experience with the Agile/Scrum methodology. Strong verbal and written communication skills. Highly detail-oriented. Self-motivated, with the ability to work independently and as part of a team. Strong willingness and comfort in taking on and challenging development approaches. Strong analytical and communication skills, with the ability to work effectively with both technical and non-technical resources. Must have strong debugging and troubleshooting skills. Able to implement and maintain Continuous Integration/Delivery (CI/CD) pipelines for the services. Able to implement and maintain the automation required to improve code logistics from development to production. Assist the team in instrumenting code for system availability. Maintain and upgrade the deployment platforms as well as system infrastructure with Infrastructure-as-Code tools. Perform system administration and ad hoc duties.

Requirements:
Education* B.E. / B.Tech / M.E. / M.Tech / MCA
Experience Range* 8+ years
Foundational Skills* Terraform development experience; Terraform Enterprise administration/operations; Go language; Java or .NET programming knowledge; Python or shell scripting; database query development experience
Desired Skills* AWS Change Management; Horizon tools (Ansible, Jira, Confluence, Bitbucket); CI/CD tools (GitHub, Jenkins, Artifactory); GCP; JIRA; Agile methodology; Python; PowerShell; HashiCorp Configuration Language (HCL); Infrastructure as Code (IaC); cloud integration (Azure, AWS, GCP); Linux administration; Site Reliability Engineering
Work Timings* 10:30 AM to 7:30 PM
Job Location* Chennai, Hyderabad, Mumbai
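The role above centers on driving Terraform Enterprise from Python. As a minimal, illustrative sketch only — the host, workspace ID, and token handling are assumptions, not details from the posting — queueing a run through the Terraform Enterprise v2 REST API might look like this:

```python
import os
import requests

# Hypothetical values; a real setup would load these from configuration.
TFE_HOST = "https://tfe.example.com"   # assumed Terraform Enterprise host
WORKSPACE_ID = "ws-AbCd1234"           # assumed workspace ID
TOKEN = os.environ["TFE_API_TOKEN"]    # assumed env var holding an API token

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/vnd.api+json",
}

def trigger_run(message: str) -> str:
    """Queue a plan/apply run on a workspace and return the run ID."""
    payload = {
        "data": {
            "type": "runs",
            "attributes": {"message": message},
            "relationships": {
                "workspace": {"data": {"type": "workspaces", "id": WORKSPACE_ID}}
            },
        }
    }
    resp = requests.post(f"{TFE_HOST}/api/v2/runs", headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()["data"]["id"]

if __name__ == "__main__":
    print("queued run:", trigger_run("nightly drift check"))
```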
Posted 2 days ago
5.0 years
6 - 9 Lacs
Hyderābād
Remote
Job Description

Role Overview: A Data Engineer is responsible for designing, building, and maintaining robust data pipelines and infrastructure that facilitate the collection, storage, and processing of large datasets. They collaborate with data scientists and analysts to ensure data is accessible, reliable, and optimized for analysis. Key tasks include data integration, ETL (Extract, Transform, Load) processes, and managing databases and cloud-based systems. Data engineers play a crucial role in enabling data-driven decision-making and ensuring data quality across organizations.

What will you do in this role: Develop comprehensive High-Level Technical Design and Data Mapping documents to meet specific business integration requirements. Own the data integration and ingestion solutions throughout the project lifecycle, delivering key artifacts such as data flow diagrams and source system inventories. Provide end-to-end delivery ownership for assigned data pipelines, performing cleansing, processing, and validation on the data to ensure its quality. Define and implement robust Test Strategies and Test Plans, ensuring end-to-end accountability for middleware testing and evidence management. Collaborate with the Solutions Architecture and Business Analyst teams to analyze system requirements and prototype innovative integration methods. Exhibit a hands-on leadership approach, ready to engage in coding, debugging, and all necessary actions to ensure the delivery of high-quality, scalable products. Influence and drive cross-product teams and collaboration while coordinating the execution of complex, technology-driven initiatives within distributed and remote teams. Work closely with various platforms and competencies to enrich the purpose of Enterprise Integration and guide their roadmaps to address current and emerging data integration and ingestion capabilities. Design ETL/ELT solutions, lead comprehensive system and integration testing, and outline standards and architectural toolkits to underpin our data integration efforts. Analyze data requirements and translate them into technical specifications for ETL processes. Develop and maintain ETL workflows, ensuring optimal performance and error-handling mechanisms are in place. Monitor and troubleshoot ETL processes to ensure timely and successful data delivery. Collaborate with data analysts and other stakeholders to ensure alignment between data architecture and integration strategies. Document integration processes, data mappings, and ETL workflows to maintain clear communication and ensure knowledge transfer.

What should you have: Bachelor’s degree in Information Technology, Computer Science, or any technology stream. 5+ years of working experience with enterprise data integration technologies – Informatica PowerCenter, Informatica Intelligent Data Management Cloud Services (CDI, CAI, Mass Ingest, Orchestration). Integration experience utilizing REST and custom API integration. Experience with relational database technologies and cloud data stores from AWS, GCP, and Azure. Experience utilizing the AWS Well-Architected Framework, deployment and integration, and data engineering. Preferred experience with CI/CD processes and related tools, including Terraform, GitHub Actions, Artifactory, etc.
Proven expertise in Python and shell scripting, with a strong focus on leveraging these languages for data integration and orchestration to optimize workflows and enhance data processing efficiency. Extensive experience in designing reusable integration patterns using cloud-native technologies. Extensive experience with process orchestration and scheduling integration jobs in Autosys and Airflow. Experience in Agile development methodologies and release management techniques. Excellent analytical and problem-solving skills. Good understanding of data modeling and data architecture principles.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business, Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Management Process, Social Collaboration, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 07/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R353285
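The posting above asks for Python-driven orchestration and scheduling in Airflow. As a minimal, hypothetical sketch — the DAG ID, task names, and extract/load bodies are invented for illustration — a daily ingestion job could be wired like this:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull records from a source API or database.
    print("extracting source records")

def load():
    # Placeholder: write validated records to the warehouse.
    print("loading into warehouse")

# Hypothetical DAG: runs daily, without backfilling historical intervals.
# (`schedule` is the Airflow 2.4+ parameter; older versions use `schedule_interval`.)
with DAG(
    dag_id="daily_ingestion_example",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```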
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Hyderābād
On-site
We are seeking a skilled Data Engineer with strong experience in Azure Data Services, Databricks, SQL, and PySpark to join our data engineering team. The ideal candidate will be responsible for building robust and scalable data pipelines and solutions to support advanced analytics and business intelligence initiatives.

Key Responsibilities: Design and implement scalable and secure data pipelines using Azure Data Factory, Databricks, and Synapse Analytics. Develop and maintain efficient ETL/ELT workflows into and within Databricks. Write complex SQL queries for data extraction, transformation, and analysis. Develop and optimize data transformation scripts using PySpark. Ensure data quality, data governance, and performance optimization across all pipelines. Collaborate with data architects, analysts, and business stakeholders to deliver reliable data solutions. Perform data modelling and design for both structured and semi-structured data. Monitor data pipelines and troubleshoot issues to ensure data integrity and timely delivery. Contribute to best practices in cloud data architecture and engineering.

Required Skills: 4–8 years of experience in data engineering or related fields. Strong experience with Azure Data Services (ADF, Synapse, Databricks, Azure Storage). Proficient with the Snowflake data warehouse, including data ingestion, Snowpipe, and streams & tasks. Advanced SQL skills, including performance tuning and complex query building. Hands-on experience with PySpark for large-scale data processing and transformation. Experience with ETL/ELT frameworks, orchestration, and scheduling. Familiarity with data modelling concepts (dimensional/star schema). Good understanding of data security, role-based access, and auditing in Snowflake and Azure.

Preferred/Good to Have: Experience with CI/CD pipelines and DevOps for data workflows. Exposure to Power BI or similar BI tools. Familiarity with Git, Terraform, or infrastructure-as-code (IaC) in cloud environments. Experience with Agile/Scrum methodologies.

Job Type: Full-time
Work Location: In person
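For the PySpark transformation work this posting describes, a minimal sketch of a cleanse-and-write step might look as follows. The paths, column names, and quality rules here are assumptions for illustration, not the employer's actual pipeline:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_cleanse_example").getOrCreate()

# Hypothetical input path; a real pipeline would read from ADLS/Blob storage.
orders = spark.read.parquet("/mnt/raw/orders")

cleansed = (
    orders
    .dropDuplicates(["order_id"])                     # de-duplicate on the business key
    .filter(F.col("order_ts").isNotNull())            # basic data-quality gate
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Write partitioned output for downstream analytics.
cleansed.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders")
```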
Posted 2 days ago
0 years
25 Lacs
India
On-site
Sr. Python Developer

Experience working with Python SDKs for AWS, GCP, and OCI is a plus. Strong knowledge of Python development, with hands-on experience in API and ORM frameworks such as Flask and SQLAlchemy. Experience with async and event-based task execution programming. Strong knowledge of Windows and Linux environments. Experience with automation tools such as Ansible or Chef. Hands-on experience with at least one cloud provider. Good at writing Terraform or cloud-native templates. Knowledge of container technology. Hands-on experience with CI/CD.

Job Types: Full-time, Permanent
Pay: ₹2,500,000.00 per year
Location Type: In-person
Schedule: Day shift
Work Location: In person
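As a minimal sketch of the Flask + SQLAlchemy API/ORM combination this posting names — the model, route, and connection string are invented purely for illustration — a small read endpoint could look like this:

```python
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Hypothetical connection string; real deployments would use a managed database.
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///example.db"
db = SQLAlchemy(app)

class Server(db.Model):
    # Invented model purely for illustration.
    id = db.Column(db.Integer, primary_key=True)
    hostname = db.Column(db.String(255), unique=True, nullable=False)
    cloud = db.Column(db.String(32), nullable=False)  # e.g. "aws", "gcp", "oci"

@app.route("/servers")
def list_servers():
    servers = Server.query.all()
    return jsonify(
        [{"id": s.id, "hostname": s.hostname, "cloud": s.cloud} for s in servers]
    )

if __name__ == "__main__":
    with app.app_context():
        db.create_all()  # create tables on first run
    app.run(debug=True)
```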
Posted 2 days ago
10.0 years
7 - 20 Lacs
India
On-site
About MostEdge
At MostEdge, we’re on a mission to accelerate commerce and build sustainable, trusted experiences. Our slogan — Protect Every Penny. Power Every Possibility. — reflects our commitment to operational excellence, data integrity, and real-time intelligence that help retailers run smarter, faster, and stronger. Our systems are mission-critical and designed for 99.99999% uptime, powering millions of transactions and inventory updates daily. We work at the intersection of AI, microservices, and retail commerce, and we win as a team.

Role Overview
We are looking for a Senior Database Administrator (DBA) to own the design, implementation, scaling, and performance of our data infrastructure. You will be responsible for mission-critical OLTP systems spanning MariaDB, MySQL, PostgreSQL, and MongoDB, deployed across AWS, GCP, and containerized Kubernetes clusters. This role plays a key part in ensuring data consistency, security, and speed across billions of rows and real-time operations.

Scope & Accountability
What You Will Own: Manage and optimize multi-tenant, high-availability databases for real-time inventory, pricing, sales, and vendor data. Design and maintain scalable, partitioned database architectures across SQL and NoSQL systems. Monitor and tune query performance and ensure fast recovery, replication, and backup practices. Partner with developers, analysts, and DevOps teams on schema design, ETL pipelines, and microservices integration. Maintain security best practices, audit logging, encryption standards, and data retention compliance.
What Success Looks Like: 99.99999% uptime maintained across all environments. <100 ms query response times for large-scale datasets. Zero unplanned data loss or corruption incidents. Developer teams experience zero bottlenecks from DB-related delays.

Skills & Experience
Must-Have: 10+ years of experience managing OLTP systems at scale. Strong hands-on experience with MySQL, MariaDB, PostgreSQL, and MongoDB. Proven expertise in replication, clustering, indexing, and sharding. Experience with Kubernetes-based deployments, Kafka queues, and Dockerized apps. Familiarity with AWS S3 storage, GCP services, and hybrid cloud data replication. Experience in startup environments with fast-moving agile teams. Track record of creating clear documentation and managing tasks via JIRA.
Nice-to-Have: Experience with AI/ML data pipelines, vector databases, or embedding stores. Exposure to infrastructure as code (e.g., Terraform, Helm). Familiarity with LangChain, FastAPI, or modern LLM-driven architectures.

How You Reflect Our Values
Lead with Purpose: You enable smarter, faster systems that empower our retail customers. Build Trust: You create safe, accurate, and recoverable environments. Own the Outcome: You take responsibility for uptime, audits, and incident resolution. Win Together: You collaborate seamlessly across product, ops, and engineering. Keep It Simple: You design intuitive schemas, efficient queries, and clear alerts.

Why Join MostEdge?
Work on high-impact systems powering real-time retail intelligence. Collaborate with a passionate, values-driven team across AI, engineering, and operations. Build at scale, with autonomy, ownership, and cutting-edge tech.
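Given the posting's emphasis on replication health and fast recovery, one common check is a replica's apply lag. A hedged sketch, assuming a PostgreSQL streaming-replication standby and an invented DSN and alert threshold (neither is a MostEdge standard):

```python
import psycopg2

# Hypothetical DSN; credentials would come from a secrets manager in practice.
REPLICA_DSN = "host=replica.example.internal dbname=app user=monitor"

def replication_lag_seconds() -> float:
    """Return the replica's apply lag in seconds (PostgreSQL standby only)."""
    query = """
        SELECT COALESCE(
            EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()),
            0.0
        );
    """
    with psycopg2.connect(REPLICA_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(query)
            (lag,) = cur.fetchone()
    return float(lag)

if __name__ == "__main__":
    lag = replication_lag_seconds()
    # The 5-second alert threshold is an assumption for illustration.
    print(f"replica lag: {lag:.2f}s", "(ALERT)" if lag > 5 else "(ok)")
```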
Job Types: Full-time, Permanent
Pay: ₹727,996.91 - ₹2,032,140.73 per year
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Evening shift, Morning shift, US shift
Supplemental Pay: Performance bonus, Yearly bonus
Work Location: In person
Expected Start Date: 31/07/2025
Posted 2 days ago
4.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra
On-site
Job Overview: We are looking for a highly experienced MERN Stack Developer with 5+ years of hands-on experience in building scalable, full-stack web applications. The ideal candidate will have strong knowledge of JavaScript and TypeScript, experience with frontend frameworks like React and Next.js, and a deep understanding of modern cloud infrastructure, DevOps practices, and UI libraries like Tailwind CSS. The candidate must be comfortable working with MongoDB, MySQL, GraphQL, AWS, Lambda functions, and Load Balancers, and should be familiar with infrastructure-as-code tools such as Kubernetes, Terraform, or CloudFormation.

Key Responsibilities:
● Develop full-stack applications using MongoDB, Express.js, React.js, Node.js (MERN).
● Build highly optimized and SEO-friendly UIs using Next.js with TypeScript and Tailwind CSS.
● Develop and consume RESTful APIs and GraphQL APIs.
● Write and maintain unit/integration tests using Jest, Mocha, or Cypress.
● Manage and optimize MongoDB and MySQL databases for performance and reliability.
● Design, build, and deploy Lambda functions for serverless workflows.
● Configure and manage Load Balancers (e.g., AWS ELB) for traffic distribution and high availability.
● Work with AWS services like EC2, S3, RDS, Lambda, CloudFront, and API Gateway.
● Collaborate with DevOps teams to implement CI/CD pipelines, Docker containers, and infrastructure as code.
● Follow Agile methodologies, participate in code reviews, and contribute to architectural decisions.

Required Skills:
● Strong knowledge of JavaScript (ES6+) and TypeScript.
● Expertise in React.js, Next.js, and Node.js.
● Proficient in Tailwind CSS for responsive and modern UI design.
● Solid experience with MongoDB and MySQL.
● Experience working with GraphQL APIs.
● Good understanding of unit and integration testing tools like Jest, Mocha, and Cypress.
● Hands-on experience with AWS services and serverless functions.
● Experience in setting up and managing Load Balancers.
● Familiar with Docker and container-based deployments.
● Familiarity with Kubernetes, Terraform, or AWS CloudFormation for managing infrastructure.

Nice to Have:
● Experience with React Native or other mobile frameworks.
● Understanding of microservices and event-driven architecture.
● Exposure to CI/CD tools like GitHub Actions, GitLab CI, or Jenkins.
● Monitoring and logging tools like CloudWatch, Datadog, or New Relic.

Qualifications:
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● Minimum 5 years of experience as a full-stack JavaScript/TypeScript developer.
● Strong communication and collaboration skills.
● Ability to work independently and in a fast-paced team environment.

Job Type: Full-time
Pay: ₹600,000.00 - ₹700,000.00 per year
Schedule: Fixed shift, Monday to Friday
Ability to commute/relocate: Mumbai, Maharashtra: Reliably commute or planning to relocate before starting work (Required)
Experience: Full-stack development: 4 years (Required)
Work Location: In person
Posted 2 days ago
40.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Overview
KLA is a global leader in diversified electronics for the semiconductor manufacturing ecosystem. Virtually every electronic device in the world is produced using our technologies. No laptop, smartphone, wearable device, voice-controlled gadget, flexible screen, VR device or smart car would have made it into your hands without us. KLA invents systems and solutions for the manufacturing of wafers and reticles, integrated circuits, packaging, printed circuit boards and flat panel displays. The innovative ideas and devices that are advancing humanity all begin with inspiration, research and development. KLA focuses more than average on innovation, and we invest 15% of sales back into R&D. Our expert teams of physicists, engineers, data scientists and problem-solvers work together with the world’s leading technology providers to accelerate the delivery of tomorrow’s electronic devices. Life here is exciting and our teams thrive on tackling really hard problems. There is never a dull moment with us.

Group/Division
Enabling the movement toward advanced chip design, KLA's Measurement, Analytics and Control group (MACH) is looking for the best and brightest research scientists, software engineers, application development engineers and senior product technology process engineers to join our team. The MACH team's mission is to collaborate with our customers to innovate technologies and solutions that detect and control highly complex process variations—at their source—rather than compensate for them at later stages of the manufacturing process. With over 40 years of semiconductor process control experience, chipmakers around the globe rely on KLA to ensure that their fabs ramp next-generation devices to volume production quickly and cost-effectively. Our MACH team develops leading-edge solutions for patterning process analytics and control technologies, thereby providing customers with critical insight at the feature level, field level and cross-wafer analysis. Our teams also develop advanced modeling simulation, data analytics and process control modeling technologies. As a member of the MACH team, you’ll be joining the most sophisticated and successful process-control company in the semiconductor industry, working across functions to solve the most complex technical problems in the digital age.

Job Description/Preferred Qualifications
Required Qualifications: Designing and implementing physical and virtual server infrastructures. In-depth knowledge of one or more flavors of Linux: RedHat, CentOS, Rocky, Ubuntu. Experience with systemd, iSCSI, multipathing, and Linux HA. Experience creating Visio diagrams to document deployments. Experience racking and cabling in a datacenter environment. Ability to code and develop shell and Python scripts, or experience using Ansible/Terraform. Strong understanding of TCP/IP fundamentals and knowledge of protocols such as DNS, DHCP, HTTP, LDAP, and SMTP. Experience with storage appliances.
Preferred Qualifications: Knowledge of Docker and Kubernetes deployments. Experience with VMware or KVM virtualization environments. Knowledge of network infrastructure technologies, such as firewalls, switches, and routers. Knowledge of troubleshooting network and storage issues. Knowledge of cloud (AWS/Azure) IaaS: EC2, EKS, AKS, AVD, etc.
Skills and Abilities: Team Orientation & Interpersonal – Highly motivated teammate with the ability to develop and maintain collaborative relationships with all levels within and external to the organization.
Organization & Time Management – Able to plan, schedule, organize, and follow up on tasks related to the job to achieve goals within or ahead of established time frames. Multi-task – Ability to expeditiously organize, coordinate, manage, prioritize, and perform multiple tasks simultaneously to swiftly assess a situation, determine a logical course of action, and apply the appropriate response. Adaptability to Change – Able to be flexible and supportive, and able to assimilate change positively and proactively in a rapid-growth environment.

Minimum Qualifications
Doctorate (Academic) Degree and 0 years of related work experience; Master's Level Degree and 3 years of related work experience; or Bachelor's Level Degree and 5 years of related work experience.

We offer a competitive, family-friendly total rewards package. We design our programs to reflect our commitment to an inclusive environment, while ensuring we provide benefits that meet the diverse needs of our employees. KLA is proud to be an equal opportunity employer.

Be aware of potentially fraudulent job postings or suspicious recruiting activity by persons that are currently posing as KLA employees. KLA never asks for any financial compensation to be considered for an interview, to become an employee, or for equipment. Further, KLA does not work with any recruiters or third parties who charge such fees either directly or on behalf of KLA. Please ensure that you have searched KLA’s Careers website for legitimate job postings. KLA follows a recruiting process that involves multiple interviews in person or on video conferencing with our hiring managers. If you are concerned that a communication, an interview, an offer of employment, or that an employee is not legitimate, please send an email to talent.acquisition@kla.com to confirm the person you are communicating with is an employee. We take your privacy very seriously and confidentially handle your information.
Posted 2 days ago
2.0 years
1 - 9 Lacs
Hyderābād
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer II at JPMorgan Chase within Consumer and Community Banking, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities
Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems. Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture. Contributes to software engineering communities of practice and events that explore new and emerging technologies. Adds to team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills
Formal training or certification on software engineering concepts and 2+ years of applied experience. Expertise and good hands-on experience with Kubernetes, Terraform, and AWS. Full SDLC for software deployment – release management, including experience in Jenkins as well as Spinnaker pipeline deployments. Proficient with DevOps practices and CI/CD pipelines. Advanced in one or more programming languages – Python, Java, Groovy. Third-party vendor data management, lifecycle, and engagement for trouble tickets using the DevOps process. Must adhere to weekly support rotation schedules, including weekends (standard DevOps cadence).

Preferred qualifications, capabilities, and skills
Familiarity with modern front-end technologies. Exposure to cloud technologies.
Posted 2 days ago
0 years
0 Lacs
Hyderābād
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India.

Minimum qualifications: Bachelor's degree in Computer Science or equivalent practical experience. Experience in automating infrastructure provisioning, Developer Operations (DevOps), integration, or delivery. Experience in networking, compute infrastructure (e.g., servers, databases, firewalls, load balancers) and architecting, developing, or maintaining cloud solutions in virtualized environments. Experience in scripting with Terraform and Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or Site Reliability Engineering.

Preferred qualifications: Certification in Cloud with experience in Kubernetes, Google Kubernetes Engine, or similar. Experience with customer-facing migration including service discovery, assessment, planning, execution, and operations. Experience with IT security practices like identity and access management, data protection, encryption, certificate and key management. Experience with Google Cloud Platform (GCP) techniques like prompt engineering, dual encoders, and embedding vectors. Experience in building prototypes or applications. Experience in one or more of the following disciplines: software development, managing operating system environments (Linux or related), network design and deployment, databases, storage systems.

About the job
The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
Provide domain expertise in cloud platforms and infrastructure to solve cloud platform challenges. Work with customers to design and implement cloud-based technical architectures, migration approaches, and application optimizations that enable business objectives. Be a technical advisor and perform troubleshooting to resolve technical challenges for customers. Create and deliver best-practice recommendations, tutorials, blog articles, and sample code. Travel up to 30% in-region for meetings, technical reviews, and onsite delivery activities.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law.
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 2 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Us
MyRemoteTeam, Inc. is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: Java + AWS DevOps
Location: Any PAN India location – hybrid working model
Experience: 6+ years
Key Focus: Java 11, Java 21, Microservices, Event-Driven Architecture, AWS, Kubernetes

Job Summary: We are looking for a highly skilled Senior Backend Engineer to design, build, and deploy scalable, cloud-native microservices using Java 11, Java 21, Spring Boot, and AWS. The ideal candidate will have strong expertise in event-driven architecture, infrastructure-as-code (Terraform), and CI/CD automation while ensuring high code quality through rigorous testing and best practices.

Key Responsibilities:
✅ Design & Development: Architect, develop, and maintain highly scalable microservices using Java 11, Java 21, and Spring Boot. Implement event-driven systems using AWS SNS, SQS, and Lambda. Ensure clean, modular, and testable code with proper design patterns and architectural principles.
✅ Testing & Quality: Promote test automation (unit, integration, contract, and E2E) using JUnit 5, Mockito, WireMock. Follow shift-left testing and CI/CD best practices to ensure reliability.
✅ Cloud & DevOps: Deploy applications using Docker, Kubernetes, and Helm on AWS. Manage Infrastructure as Code (IaC) with Terraform. Monitor systems using Grafana, Prometheus, Kibana, and Sensu.
✅ Database & Performance: Work with PostgreSQL, DynamoDB, MongoDB, Redis, and Elasticsearch for optimized data storage and retrieval. Ensure high availability, fault tolerance, and performance tuning.
✅ Agile & Collaboration: Work in Scrum with pair programming, peer reviews, and iterative demos. Take ownership of backend features from design to production deployment.

Must-Have Qualifications: 6+ years of hands-on JVM backend development (Java 11 and Java 21). Expertise in Spring Boot, Spring Cloud, and Hibernate. Strong experience with AWS (SNS, SQS, Lambda, S3, CloudFront) + Terraform (IaC). Microservices and event-driven architecture design and implementation. Test automation (JUnit 5, Mockito, WireMock) and CI/CD pipelines (Jenkins, Kubernetes). Database proficiency: PostgreSQL, DynamoDB, MongoDB, Redis. Containerization & orchestration: Docker, Kubernetes, Helm. Monitoring & logging: Grafana, Prometheus, Kibana. Fluent English and strong communication skills.
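The SNS → SQS → Lambda pattern this posting centers on can be sketched compactly. The example below is in Python rather than the Java the role calls for, purely to keep the illustration short; the handler name and message shape are assumptions:

```python
import json

def handler(event, context):
    """Hypothetical Lambda consumer for an SQS queue subscribed to an SNS topic."""
    for record in event["Records"]:        # SQS delivers messages in batches
        body = json.loads(record["body"])  # the SQS message body
        # When SNS publishes to SQS without raw message delivery enabled,
        # the payload is nested under the "Message" key of the SNS envelope.
        message = json.loads(body["Message"]) if "Message" in body else body
        process(message)
    # Returning normally tells Lambda the whole batch succeeded; a raised
    # exception makes the messages visible again on the queue for retry.
    return {"processed": len(event["Records"])}

def process(message: dict) -> None:
    # Placeholder business logic.
    print("handling event:", message.get("type", "unknown"))
```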
Posted 2 days ago
2.0 years
4 - 7 Lacs
Hyderābād
On-site
Working in Application Support means you'll use both creative and critical thinking skills to maintain application systems that are crucial to the daily operations of the firm. As an Application Support analyst at JPMorgan Chase within the Employee Platform, you'll work collaboratively in teams on a wide range of projects based on your primary area of focus: design or programming. While learning to fix application and data issues as they arise, you'll also gain exposure to software development, testing, deployment, maintenance, and improvement, in addition to production lifecycle methodologies and risk guidelines. Finally, you'll have the opportunity to develop professionally and to grow your career in any direction you choose.

Job responsibilities
Participate in triaging, examining, diagnosing, and resolving incidents, and work with others to solve problems at their root. Participate in the weekend support rota to ensure adequate business support coverage during core hours and weekends as part of a global team. Assist in the monitoring of production environments for anomalies and address issues utilizing standard observability tools. Identify issues for escalation and communication, and provide solutions to business and technology stakeholders. Participate in root-cause calls and drive actions to resolution with a keen focus on preventing incidents. Recognize the manual activity within your role and proactively work towards eliminating it, either through system engineering or by updating application code.

Required qualifications, capabilities, and skills
Formal training or certification on Application Support concepts and 2+ years of experience, or equivalent expertise in troubleshooting, resolving, and maintaining information technology services. Experience in observability and monitoring tools and techniques; familiar with tools such as Splunk, ServiceNow, Dynatrace, etc. Experience with one or more general-purpose programming languages (Python or C#) and/or automation scripting (PowerShell). Experience with CI/CD tools like Jenkins, Bitbucket, GitLab, Terraform. Eagerness to participate in learning opportunities to enhance one’s effectiveness in executing day-to-day project activities.

Preferred qualifications, capabilities, and skills
Experience with and understanding of Genetec Security Desk. Understanding of cloud infrastructure.
Posted 2 days ago
5.0 years
0 Lacs
Gurgaon
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries, serving more than 6 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About the role
We are looking for a Senior Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful: a Sr. Data Engineer who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot and support ETL pipelines and the cloud infrastructure involved in the process, and will support the visualization team.

Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate deep technical and domain knowledge of relational and non-relational databases, Data Warehouses, and Data Lakes, among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Be efficient in ETL/ELT development using Azure cloud services and Snowflake, testing, and the operation/support process (RCA of production issues, code/data fix strategy, monitoring and maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions. Build a cross-platform data strategy to aggregate multiple sources and process development datasets. Be proactive in stakeholder communication, mentor/guide junior resources through regular KT/reverse KT, and help them identify production bugs/issues if needed and provide resolution recommendations.

Job Requirements
Bachelor’s Degree in Computer Engineering, Computer Science or related discipline; Master’s Degree preferred. 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment.
5+ years of experience with setting up and operating data pipelines using Python or SQL. 5+ years of advanced SQL programming: PL/SQL, T-SQL. 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads. 5+ years of strong and extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data. 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions. 5+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts. Understanding of REST and good API design. Experience working with Apache Iceberg, Delta tables and distributed computing frameworks. Strong collaboration and teamwork skills, and excellent written and verbal communication skills. Self-starter and motivated, with the ability to work in a fast-paced development environment. Agile experience highly desirable. Proficiency in the development environment, including IDE, database server, Git, Continuous Integration, unit-testing tools, and defect management tools.

Knowledge
Strong knowledge of Data Engineering concepts (data pipeline creation, Data Warehousing, Data Marts/Cubes, data reconciliation and audit, data management). Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques. Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks. Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM) and Data Quality tools. Strong experience in ETL/ELT development, QA, and the operation/support process (RCA of production issues, code/data fix strategy, monitoring and maintenance). Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting. ADF, Databricks and Azure certification is a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
#LI-DS1
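For the Snowflake ingestion work this posting describes, a minimal sketch of a staged bulk load using the Snowflake Python connector is shown below. The account, stage name, table, and file format are all assumptions for illustration, not Circle K's actual setup:

```python
import snowflake.connector

# Hypothetical connection parameters; real values would come from a vault.
conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="etl_user",
    password="...",      # placeholder, never hard-coded in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Assumes a named external stage (@RAW_STAGE) pointing at Blob/S3 files.
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @RAW_STAGE/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    for row in cur.fetchall():  # one result row per staged file, with load status
        print(row)
finally:
    conn.close()
```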
Posted 2 days ago
3.0 years
0 Lacs
Gurgaon
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries, serving more than 6 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About the role
We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful: a Data Engineer who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by transforming data to achieve actionable business outcomes.

Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, and Data Lakes, among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Be efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions. Build a cross-platform data strategy to aggregate multiple sources and process development datasets. Be proactive in stakeholder communication, mentor/guide junior resources through regular KT/reverse KT, and help them identify production bugs/issues if needed and provide resolution recommendations.

Job Requirements
Bachelor’s degree in Computer Engineering, Computer Science or related discipline; Master’s Degree preferred. 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment. 3+ years of experience with setting up and operating data pipelines using Python or SQL. 3+ years of advanced SQL programming: PL/SQL, T-SQL. 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads. 3+ years of
strong and extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data. 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions. 3+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts. Understanding of REST and good API design. Experience working with Apache Iceberg, Delta tables and distributed computing frameworks. Strong collaboration and teamwork skills, excellent written and verbal communication skills. Self-starter and motivated, with the ability to work in a fast-paced development environment. Agile experience highly desirable. Proficiency in the development environment, including IDE, database server, Git, Continuous Integration, unit-testing tools, and defect management tools.

Preferred Skills
Strong knowledge of Data Engineering concepts (data pipeline creation, Data Warehousing, Data Marts/Cubes, data reconciliation and audit, data management). Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques. Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks. Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM) and Data Quality tools. Strong experience in ETL/ELT development, QA, and the operation/support process (RCA of production issues, code/data fix strategy, monitoring and maintenance). Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting. ADF, Databricks and Azure certification is a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
#LI-DS1
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: DevOps Engineer
Location: Chennai (full-time, at office)
Years of Experience: 4-8 years

Job Summary: We are seeking a skilled DevOps engineer with knowledge of automation, continuous integration, deployment, and delivery processes. The ideal candidate should be a self-starter with hands-on production experience and excellent communication skills.

Key Responsibilities:
● Infrastructure as Code: apply first principles to cloud infrastructure, system design, and application deployments.
● CI/CD pipelines: design, implement, troubleshoot, and maintain CI/CD pipelines.
● System administration: skills with systems, networking, and security fundamentals.
● Proficiency in coding: hands-on experience in programming languages, with the ability to write, review, and troubleshoot code for infrastructure.
● Monitoring and observability: track the performance and health of services, and configure alerts with interactive dashboards for reporting.
● Security: best practices and familiarity with audits, compliance, and regulation.
● Communication skills: clearly and effectively discuss and collaborate across cross-functional teams.
● Documentation: using Agile methodologies, Jira, and Git.

Qualification:
● Education: Bachelor's degree in CS, IT, or a related field (or equivalent work experience).
● Skills*:
Infrastructure: Docker, Kubernetes, ArgoCD, Helm, Chronos, GitOps.
Automation: Ansible, Puppet, Chef, Salt, Terraform, OpenTofu.
CI/CD: Jenkins, CircleCI, ArgoCD, GitLab, GitHub Actions.
Cloud platforms: Amazon Web Services (AWS), Azure, Google Cloud.
Operating Systems: Windows, *nix distributions (Fedora, Red Hat, Ubuntu, Debian), *BSD, Mac OS X.
Monitoring and observability: Prometheus, Grafana, Elasticsearch, Nagios.
Databases: MySQL, PostgreSQL, MongoDB, Qdrant, Redis.
Programming Languages: Python, Bash, JavaScript, TypeScript, Golang.
Documentation: Atlassian Jira, Confluence, Git.
(* Proficient in one or more tools in each category.)

Additional Requirements:
• Include a GitHub or GitLab profile link in the resume.
• Only candidates with a Computer Science or Information Technology engineering background will be considered.
• Primary operating system should be Linux (Ubuntu or any distribution) or macOS.
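For the Prometheus-based monitoring this posting lists, a minimal sketch of instrumenting a Python service with the prometheus_client library is shown below; the metric names and port are invented for illustration:

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Invented metric names, purely for illustration.
REQUESTS_TOTAL = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently waiting in the queue")

def handle_one_request():
    REQUESTS_TOTAL.inc()
    QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        handle_one_request()
        time.sleep(1)
```

Grafana dashboards and alert rules would then query these series by name from the Prometheus server.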
Posted 2 days ago
8.0 years
4 - 8 Lacs
Gurgaon
On-site
- 8+ years’ experience in Java/J2EE and 2+ years on any cloud platform; Bachelor’s in IT, CS, Math, Physics, or related field.
- Strong skills in Java, J2EE, REST, SOAP, Web Services, and deploying on servers like WebLogic, WebSphere, Tomcat, JBoss.
- Proficient in UI development using JavaScript/TypeScript frameworks such as Angular and React.
- Experienced in building scalable business software with core AWS services and engaging with customers on best practices and project management.

The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

As a Delivery Consultant with a deep understanding of AWS products and services, you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious.
Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. AWS experience preferred, with proficiency in EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, and AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer). Strong scripting and automation skills (Terraform, Python) and knowledge of security/compliance standards (HIPAA, GDPR). Strong communication skills, able to explain technical concepts to both technical and non-technical audiences. Experience in designing, developing, and deploying scalable business software using AWS services like Lambda, Elastic Beanstalk, and Kubernetes. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 2 days ago
4.0 - 6.0 years
0 Lacs
Gurgaon
On-site
Locations: Bengaluru | Gurgaon

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation - inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As a part of BCG's X team, you will work closely with consulting teams on a diverse range of advanced analytics and engineering topics. You will have the opportunity to leverage analytical methodologies to deliver value to BCG's Consulting (case) teams and Practice Areas (domain) by providing analytical and engineering subject matter expertise. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data pipelines, systems, and solutions that empower our clients to make informed business decisions. You will collaborate closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-quality data solutions that meet our clients' needs.

YOU'RE GOOD AT
- Delivering original analysis and insights to case teams, typically owning all or part of an analytics module while integrating with a case team.
- Designing, developing, and maintaining efficient and robust data pipelines for extracting, transforming, and loading data from various sources to data warehouses, data lakes, and other storage solutions.
- Building data-intensive solutions that are highly available, scalable, reliable, secure, and cost-effective using programming languages like Python and PySpark.
- Deep knowledge of Big Data querying and analysis tools, such as PySpark, Hive, Snowflake, and Databricks.
- Broad expertise in at least one cloud platform such as AWS, GCP, or Azure.
- Working knowledge of automation and deployment tools such as Airflow, Jenkins, and GitHub Actions, as well as infrastructure-as-code technologies like Terraform and CloudFormation.
- Good understanding of DevOps, CI/CD pipelines, orchestration, and containerization tools like Docker and Kubernetes.
- Basic understanding of Machine Learning methodologies and pipelines.
- Communicating analytical insights through sophisticated synthesis and packaging of results (including PPT slides and charts) with consultants; collecting, synthesizing, and analyzing case team learning and inputs into new best practices and methodologies.
- Communication Skills: strong communication skills, enabling effective collaboration with both technical and non-technical team members.
- Thinking Analytically: strong analytical solutioning with hands-on experience in advanced analytics delivery through the entire analytics life cycle, and the ability to develop and codify knowledge and provide analytical advice where required.

What You'll Bring
- Bachelor's / Master's degree in computer science engineering/technology.
- At least 4-6 years within the relevant domain of Data Engineering across industries, with work experience providing analytics solutions in a commercial setting. Consulting experience will be considered a plus.
- Proficient understanding of distributed computing principles, including management of Spark clusters with all included services; experience with various implementations of Spark preferred.
- Hands-on experience with data engineering tasks such as productizing data pipelines, building CI/CD pipelines, and code orchestration using tools like Airflow and DevOps practices.

Good to have:
- Software engineering concepts and best practices, like API design and development, testing frameworks, packaging, etc.
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB.
- Knowledge of web development technologies.
- Understanding of different stages of machine learning system design and development.

Who You'll Work With
You will work with the case team and/or client technical POCs and the broader X team.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
Posted 2 days ago
0 years
0 Lacs
India
Remote
About Us
Our leading SaaS-based Global Growth Platform™ enables clients to expand into over 180 countries quickly and efficiently, without the complexities of establishing local entities. At G-P, we’re dedicated to breaking down barriers to global business and creating opportunities for everyone, everywhere. Our diverse, remote-first teams are essential to our success. We empower our Dream Team members with flexibility and resources, fostering an environment where innovation thrives and every contribution is valued and celebrated. The work you do here will positively impact lives around the world. We stand by our promise: Opportunity Made Possible. In addition to competitive compensation and benefits, we invite you to join us in expanding your skills and helping to reshape the future of work. At G-P, we assist organizations in building exceptional global teams in days, not months—streamlining the hiring, onboarding, and management process to unlock growth potential for all.

About The Role
As a Principal AI Engineer, you will design, develop, and deploy AI solutions that address complex business challenges. This role requires advanced expertise in artificial intelligence, including machine learning and natural language processing, and the ability to implement these technologies in production-grade systems.

Key Responsibilities
- Develop innovative, scalable AI solutions for real business problems.
- Drive the full lifecycle of projects from conception to deployment, ensuring alignment with business objectives.
- Own highly open-ended projects end-to-end, from the analysis of business requirements to the deployment of solutions. Expect to dedicate about 20% of your time to understanding problems and collaborating with stakeholders.
- Manage complex data sets, design efficient data processing pipelines, and work on robust models. Expect to spend approximately 80% of your time on data and ML engineering tasks related to developing AI systems.
- Work closely with other AI engineers, product managers, and stakeholders to ensure that AI solutions meet business needs and enhance user satisfaction.
- Write clear, concise, and comprehensive technical documentation for all projects and systems developed.
- Stay updated on the latest developments in the field; explore and prototype new technologies and approaches to address specific challenges faced by the business.
- Develop and maintain high-quality machine learning services; prioritize robust engineering practices and user-centric development.
- Work independently and influence at different levels of the organization.
- Be highly motivated and results-driven.

Required Skills And Qualifications
- Master’s degree in Computer Science, Machine Learning, Statistics, Engineering, Mathematics, or a related field
- Deep understanding and practical experience in machine learning and natural language processing, especially LLMs
- Strong foundational knowledge in statistical modeling, probability, and linear algebra
- Extensive practical experience with curating datasets, training models, analyzing post-deployment data, and developing robust metrics to ensure model reliability
- Experience developing and maintaining machine learning services for real-world applications at scale
- Strong Python programming skills
- High standards for code craftsmanship (maintainable, testable, production-ready code)
- Proficiency with Docker
- Knowledge of system design and cloud infrastructure for secure and scalable AI solutions
- Proficiency with AWS
- Proven track record in driving AI projects with strong technical leadership
- Excellent communication skills when engaging with both technical and non-technical stakeholders

Nice To Have Qualifications
- Experience with natural language processing for legal applications
- Proficiency with Terraform
- React and Node.js experience

If you're ready to make an impact in a high-paced startup environment, with a team that embraces innovation and hard work, G-P is the place for you. Be ready to hustle and put in the extra hours when needed to drive our mission forward.

We will consider for employment all qualified applicants who meet the inherent requirements for the position. Please note that background checks are required, and this may include criminal record checks.

G-P. Global Made Possible.

G-P is a proud Equal Opportunity Employer, and we are committed to building and maintaining a diverse, equitable and inclusive culture that celebrates authenticity. We prohibit discrimination and harassment against employees or applicants on the basis of race, color, creed, religion, national origin, ancestry, citizenship status, age, sex or gender (including pregnancy, childbirth, and pregnancy-related conditions), gender identity or expression (including transgender status), sexual orientation, marital status, military service and veteran status, physical or mental disability, genetic information, or any other legally protected status.

G-P also is committed to providing reasonable accommodations to individuals with disabilities. If you need an accommodation due to a disability during the interview process, please contact us at careers@g-p.com.
Posted 2 days ago
8.0 years
20 - 28 Lacs
Gurgaon
On-site
Job Title: DevOps Engineer
Location: Gurgaon (Work From Office)
Job Type: Full-Time Role
Experience Level: 8-12 Years

Job Summary: We are looking for a skilled and proactive DevOps Engineer to join our technology team. The ideal candidate will be responsible for managing the infrastructure, automating workflows, and ensuring smooth deployment and integration of code across various environments. You will work closely with developers, QA teams, and system administrators to improve CI/CD pipelines, scalability, reliability, and security.

Key Responsibilities:
- Design, build, and maintain efficient CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions).
- Automate provisioning, deployment, monitoring, and scaling of infrastructure.
- Manage and monitor cloud services (AWS, Azure, GCP) and on-premises environments.
- Configure and manage container orchestration (Docker, Kubernetes).
- Implement infrastructure as code using tools like Terraform, CloudFormation, or Ansible (see the sketch after this listing).
- Ensure high availability, performance, and security of production systems.
- Monitor logs, metrics, and application performance; implement alerting and incident response.
- Collaborate with development and QA teams to streamline release processes.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience in a DevOps or Systems Engineering role.
- Proficiency with Linux-based infrastructure.
- Hands-on experience with at least one major cloud provider (AWS, Azure, or GCP).
- Strong scripting skills (Bash, Python, PowerShell, etc.).
- Experience with configuration management and IaC tools (e.g., Terraform, Ansible).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Understanding of networking, security, DNS, load balancing, and firewalls.

Preferred Qualifications:
- Certification in AWS, Azure, or GCP.
- Experience with monitoring tools like Prometheus, Grafana, ELK Stack, Datadog, etc.
- Exposure to Agile/Scrum methodologies.
- Knowledge of security best practices in DevOps environments.

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,800,000.00 per year
Work Location: In person
Speak with the employer: +91 9319571799
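To make the IaC expectation above concrete, here is a minimal Terraform sketch of the kind of provisioning such roles automate. It is illustrative only: the region, the AMI ID, and all names are placeholder assumptions, not details from this posting.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Region is a placeholder assumption.
provider "aws" {
  region = "ap-south-1"
}

# A single tagged instance; the AMI ID below is a dummy value and must be
# replaced with a real one for the chosen region.
resource "aws_instance" "app_server" {
  ami           = "ami-00000000000000000"
  instance_type = "t3.micro"

  tags = {
    Name      = "app-server-dev"
    ManagedBy = "terraform"
  }
}
```

Running terraform init, plan, and apply against a configuration like this is the day-to-day loop behind the posting's provisioning and CI/CD bullets.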
Posted 2 days ago
3.0 - 7.0 years
5 - 10 Lacs
Pune
Work from Office
This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background, have good working experience in Python and Spark technology, be hands-on, and be able to work independently with minimal technical/tool guidance. The candidate should also be able to technically guide and mentor junior resources in the team. As a developer, you will bring extensive design and development skills to strengthen the group of developers within the team, and will make extensive use of Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.

Your key responsibilities
- Design and discuss your own solution for addressing user stories and tasks.
- Develop, unit-test, integrate, deploy, maintain, and improve software.
- Perform peer code review.
- Actively participate in the sprint activities and ceremonies, e.g., daily stand-up/scrum meeting, sprint planning, retrospectives, etc.
- Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management).
- Collaborate with other team members to achieve the sprint objectives.
- Report progress and update Agile team management tools (JIRA/Confluence).
- Manage individual task priorities and deliverables.
- Take responsibility for the quality of the solutions you provide.
- Contribute to planning and continuous improvement activities, and support the PO, ITAO, developers, and Scrum Master.

Your skills and experience
- Engineer with good development experience on a Big Data platform for at least 5 years.
- Hands-on experience in Spark (Hive, Impala).
- Hands-on experience in the Python programming language.
- Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions.
- Experience in the setup, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps; able to create and maintain fully automated CI build processes and write build and deployment scripts.
- Experience with development platforms: OpenShift/Kubernetes/Docker configuration and deployment with DevOps tools, e.g., GIT, TeamCity, Maven, SONAR.
- Good knowledge of the core SDLC processes and tools such as HP ALM, Jira, ServiceNow.
- Strong analytical skills and proficient communication skills; fluent in English (written/verbal).
- Ability to work in virtual teams and in matrixed organizations; excellent team player.
- Open-minded and willing to learn business and technology; keeps pace with technical innovation; understands the relevant business area.
- Ability to share information and transfer knowledge and expertise to team members.
Posted 2 days ago
5.0 years
5 - 8 Lacs
Gurgaon
On-site
About the team: The cloud platform teams design, implement, support, and improve the cloud technology stack to ensure systems security, reliability, and availability. We design, deploy, and maintain advanced log analysis and monitoring systems; we lead automated, agile-based release management for our 24x7 online application stack; and we maintain and develop our CI/CD pipelines and deployment automation tools for our release processes. We implement IaC using Terraform as well as managing more IaaS-based infrastructure (see the sketch after this listing), and we manage and support all PaaS platforms in use in our business!

Who we are looking for: We are looking for a highly competent, reliable, self-starting IT generalist with experience in a Windows Server administration or SRE role with web application support. You must have strong infrastructure knowledge with great analysis and problem-solving skills - perhaps you've also done some scripting or automation work in a previous role or in a part-time capacity? Come talk to us!

Responsibilities
- Resolve complex technical issues in infrastructure, applications, platforms, and back-office systems
- Manage and monitor Azure cloud resources, performance, security, and costs using various tools and frameworks
- Provide a third line of support for issues from the front-line incident managers
- Deploy software releases to our Azure-based systems using a squad methodology
- Clearly think through, communicate, and participate in the wider ITS sessions
- Be part of an on-call rota as needed

We are looking for someone who has:
- 5 years' experience supporting Azure cloud infrastructure
- 3+ years' experience supporting web application technologies
- At least 2 years' experience using Octopus Deploy
- Excellent knowledge of Azure technologies and the Azure stack
- Knowledge of all of the following: TCP/IP, DNS, DHCP, SSL, IIS, Windows Server OS
- High proficiency in PowerShell and Bash
- Strong IT admin, networking, and troubleshooting skills
- Excellent verbal and written communication skills
- A can-do attitude; works with minimal oversight and to high standards
- The ability to prioritise and work in a fast-paced, high-volume, agile environment
- Knowledge of Terraform
- Knowledge of Hyland Alfresco and HIDP

Better if you have:
- Experience in automating and streamlining a software development lifecycle (SDLC), configuration management, etc.
- Experience using Google Cloud Platform
- Experience working in a regulated financial entity
- Experience working with agile methodologies such as Scrum or Kanban

Insight: candidates with a minimum of 3 years' Azure experience and roughly 5 years' experience overall may be considered; Octopus Deploy and scripting experience would also be a bonus.
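For a concrete picture of the Terraform-on-Azure work this team describes, a minimal sketch follows. The resource group, location, and storage account are hypothetical examples, not this team's actual estate.

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

# The azurerm provider requires a features block, even if empty.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "platform" {
  name     = "rg-platform-dev" # hypothetical name
  location = "West Europe"     # hypothetical location
}

# Storage account names must be globally unique, lowercase alphanumeric.
resource "azurerm_storage_account" "logs" {
  name                     = "exampleplatformlogs01"
  resource_group_name      = azurerm_resource_group.platform.name
  location                 = azurerm_resource_group.platform.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

Declaring resources this way, rather than clicking through the portal, is what makes the team's 24x7 release process repeatable and reviewable.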
Posted 2 days ago
4.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 80366
Date: Jun 16, 2025
Location: Delhi CEC
Designation: Consultant

Job Description
Location: Gurgaon

About your role: The position is for a Java Development Specialist. The role involves development using core Java skills (OOPS, Collections, Multi-Threading), SQL, Spring Core, Spring MVC, Hibernate, etc. Knowledge of working in an Agile team with DevOps principles would be an additional advantage. The role also involves intensive interaction with the business and other technology groups, hence strong communication skills and the ability to work under pressure are an absolute must. The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment.

Essential Skills
- Minimum 4 years of experience with web services & REST APIs
- Minimum 2 years of experience with cloud - any one of AWS/Azure/Cloud Foundry/Heroku/GCP
- UML, design patterns, data structures, clean coding
- Experience in CI/CD, TDD, DevOps; CI/CD tools - Jenkins/UrbanCode/SonarQube/Bamboo
- AWS Lambda, Step Functions, DynamoDB, API Gateway, Cognito, S3, SNS, VPC, IAM, EC2, ECS, etc.
- Hands-on with coding and debugging; should be able to write high-quality code optimized for performance
- Good analytical and problem-solving skills; should be good with algorithms
- Spring MVC, Spring Boot, Spring Batch, Spring Security
- Git, Maven/Gradle
- Hibernate (or JPA)

Key Responsibilities
- Work on Java/PaaS applications.
- Own and deliver technically sound solutions for the ‘Integration Layer’ product.
- Work and develop on Java/FIL PaaS/AWS applications.
- Interact with senior architects and other consultants to understand and review the technical solution and direction.
- Communicate with business analysts to discuss various business requirements.
- Proactively refactor code/solutions, and be aggressive about tech-debt identification and reduction.
- Develop, maintain, and troubleshoot issues; take a leading role in the ongoing support and enhancement of the applications.
- Help maintain the standards, procedures, and best practices in the team, and help the team follow these standards.
- Prioritise requirements in the pipeline with stakeholders.

Experience and Qualification:
- B.E./B.Tech. or M.C.A. in Computer Science from a reputed university
- Total 4 to 6 years of experience with application development in Java and related frameworks

Skills - nice to have:
- Spring Batch, Spring Integration
- PL/SQL, Unix
- IaC (Infrastructure as Code) - Terraform/SAM/CloudFormation
- JMS, IBM MQ
- Layer7/Apigee
- Docker/Kubernetes
- Microsoft Teams development experience
- Linux basics
Posted 2 days ago
5.0 - 8.0 years
7 - 17 Lacs
Chennai
Work from Office
Responsibilities
- Implement and manage cloud infrastructure using Infrastructure as Code (IaC) for compute, storage, network services, and container/Kubernetes management to support high-volume, low-latency CAMS applications (see the sketch after this listing).
- Maintain deep understanding and oversight of all IaC solutions to ensure consistent, repeatable, and secure infrastructure capabilities that can scale on demand.
- Monitor and manage infrastructure performance to meet service level agreements (SLAs), control costs, and prioritize automation in all deployment processes.
- Ensure that infrastructure designs and architectures align with technical specifications and business requirements.
- Provide key support and contribute to the full lifecycle ownership of platform services.
- Adhere to DevOps principles and participate in end-to-end platform ownership, including occasional incident resolution outside normal hours as part of an on-call rota.
- Engage in project scoping, requirements analysis, and technical discovery to shape effective infrastructure solutions.
- Perform performance tuning, monitoring, and maintenance of fault-tolerant, highly available infrastructure to deliver scalable services.
- Maintain detailed oversight of automation processes and infrastructure security, implementing improvements as necessary.
- Support continuous improvement by researching alternative approaches and technologies and presenting recommendations for architectural review.
- Collaborate with teams to contribute to architectural design decisions.
- Utilize experience with CI/CD pipelines, GitOps, and Kubernetes management to streamline deployment and operations.

Work Experience
- Over 7 years of proven hands-on technical experience.
- More than 5 years of experience leading and managing cloud infrastructure, including VPC, compute, storage, container services, Kubernetes, and related technologies.
- Strong Linux system administration skills across CentOS, Ubuntu, and GKE environments, including patching, configuration, and maintenance.
- Practical expertise with continuous integration tools such as Jenkins and GitLab, along with build automation and dependency management.
- Proven track record of delivering software releases on schedule.
- Committed to a collaborative working style and effective team communication, thriving in small, agile teams.
- Experience designing and implementing zero-downtime deployment solutions in cloud environments.
- Solid understanding of database and big data technologies, including both SQL and NoSQL systems.

#GoogleCloudPlatform #Terraform #Git #GitOps #Kubernetes #IaC

Please share your profiles to divyaa.m@camsonline.com
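As a sketch of how IaC and GitOps fit together in a stack like the one tagged above: the fragment below keeps Terraform state in a GCS bucket so that pipeline-driven runs share one source of truth, then declares a small GKE cluster. The project ID, bucket name, and region are hypothetical.

```hcl
terraform {
  # Remote state lets CI/CD (GitOps) runs and engineers share one state file.
  backend "gcs" {
    bucket = "example-terraform-state" # hypothetical; must exist beforehand
    prefix = "platform/prod"
  }

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = "example-project-id" # hypothetical project
  region  = "asia-south1"
}

# A deliberately small cluster using the default node pool.
resource "google_container_cluster" "primary" {
  name               = "platform-gke"
  location           = "asia-south1"
  initial_node_count = 3
}
```

With state held remotely, every change flows through version control and a pipeline apply, which is the repeatable, reviewable behaviour the responsibilities above describe.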
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Bhubaneshwar
On-site
Position: Senior Security Engineer (NV58FCT RM 3325)

Job Description:
- 5-8 years of experience in security engineering, preferably with a focus on cloud-based systems.
- Strong understanding of cloud infrastructure (AWS/GCP/Azure), including IAM, VPC, security groups, key management, etc.
- Hands-on experience with security tools (e.g., AWS Security Hub, Azure Defender, Prisma Cloud, CrowdStrike, Burp Suite, Nessus, or equivalent).
- Familiarity with containerization and orchestration security (Docker, Kubernetes).
- Proficient in scripting (Python, Bash, etc.) and infrastructure automation (Terraform, CloudFormation, etc.); see the sketch after this listing.
- In-depth knowledge of encryption, authentication, authorization, and secure communications.
- Experience interfacing with clients and translating security requirements into actionable solutions.

Preferred Qualifications:
- Certifications such as CISSP, CISM, CCSP, OSCP, or cloud-specific certs (e.g., AWS Security Specialty).
- Experience with zero trust architecture and DevSecOps practices.
- Knowledge of secure mobile or IoT platforms is a plus.

Soft Skills:
- Strong communication and interpersonal skills to engage with clients and internal teams.
- Analytical mindset with attention to detail and a proactive attitude toward risk mitigation.
- Ability to prioritize and manage multiple tasks in a fast-paced environment.
- Document architectures, processes, and procedures, ensuring clear communication across the team.

Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Bhubaneshwar / Noida
Experience: 5 - 8 Years
Notice period: 0-30 days
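One way such a role turns security requirements into actionable, reviewable artifacts is by codifying them in Terraform, so controls are audited in pull requests rather than reconstructed after deployment. A minimal sketch follows; the CIDR range, names, and variable are illustrative assumptions.

```hcl
# Least-privilege ingress: HTTPS only, from an internal range only.
resource "aws_security_group" "web" {
  name        = "web-tls-only"
  description = "Allow inbound HTTPS only"
  vpc_id      = var.vpc_id

  ingress {
    description = "TLS from the internal range only"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"] # illustrative internal range
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}

variable "vpc_id" {
  type        = string
  description = "Target VPC for the security group"
}
```

Because the rule set lives in code, a too-broad CIDR or an unexpected open port shows up as a diff in review, which is the DevSecOps practice the preferred qualifications mention.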
Posted 2 days ago
6.0 years
2 - 6 Lacs
Noida
On-site
About Foxit
Foxit is a global software company reshaping how the world interacts with documents. With over 700 million users worldwide, we offer cutting-edge PDF, collaboration, and e-signature solutions across desktop, mobile, and cloud platforms. As we expand our SaaS and cloud-native capabilities, we're seeking a technical leader who thrives in distributed environments and can bridge the gap between development and operations at global scale.

Role Overview
As a Senior Development Support Engineer, you will serve as a key technical liaison between Foxit’s global production environments and our China-based development teams. Your mission is to ensure seamless cross-border collaboration by investigating complex issues, facilitating secure and compliant debugging workflows, and enabling efficient delivery through modern DevOps and cloud infrastructure practices. This is a hands-on, hybrid role requiring deep expertise in application development, cloud operations, and diagnostic tooling. You'll work across production environments to maintain business continuity, support rapid issue resolution, and empower teams working under data access and sovereignty constraints.

Key Responsibilities

Cross-Border Development Support
- Investigate complex, high-priority production issues inaccessible to China-based developers.
- Build sanitized diagnostic packages and test environments to enable effective offshore debugging.
- Lead root cause analysis for customer-impacting issues across our Java and PHP-based application stack.
- Document recurring patterns and technical solutions to improve incident response efficiency.
- Partner closely with China-based developers to maintain architectural alignment and system understanding.

Cloud Infrastructure & DevOps
- Manage containerized workloads (Docker/Kubernetes) in AWS and Azure; optimize performance and cost.
- Support deployment strategies (blue-green, canary, rolling) and troubleshoot CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI).
- Implement and manage Infrastructure as Code using Terraform (multi-cloud), with CloudFormation or ARM Templates as a plus.
- Support observability through tools like New Relic, CloudWatch, Azure Monitor, and log aggregation systems.
- Automate environment provisioning, monitoring, and diagnostics using Python, Bash, and PowerShell.

Collaboration & Communication
- Translate production symptoms into actionable debugging tasks for teams without access to global environments.
- Work closely with database, QA, and SRE teams to resolve infrastructure or architectural issues.
- Ensure alignment with global data compliance policies (SOC 2, NSD-104, GDPR) when sharing data across borders.
- Communicate technical issues and resolutions clearly to both technical and non-technical stakeholders.

Qualifications

Technical Skills
- Languages: advanced in Java and PHP (Spring Boot, Yii); familiarity with JavaScript a plus.
- Architecture: experience designing and optimizing backend microservices and APIs.
- Cloud Platforms: hands-on with AWS (EC2, Lambda, RDS) and Azure (VMs, Functions, SQL DB).
- Containerization: Docker & Kubernetes (EKS/AKS); Helm experience a plus.
- IaC & Automation: proficient in Terraform; scripting with Python/Bash.
- DevOps: familiar with modern CI/CD pipelines; automated testing (Cypress, Playwright).
- Databases & Messaging: MySQL, MongoDB, Redis, RabbitMQ.

Professional Experience
- 6+ years of full-stack or backend development experience in high-concurrency systems.
- Strong understanding of system design, cloud infrastructure, and global software deployment practices.
- Experience working in global, distributed engineering teams with data privacy or access restrictions.

Preferred
- Exposure to compliance frameworks (SOC 2, GDPR, NSD-104, ISO 27001, HIPAA).
- Familiarity with cloud networking, CDN configuration, and cost optimization strategies.
- Tools experience with Postman, REST Assured, or security testing frameworks.
- Language: fluency in English; Mandarin Chinese is a strong plus.

Why Foxit?
- Work at the intersection of development and operations on a global scale.
- Be a trusted technical enabler for distributed teams facing real-world constraints.
- Join a high-impact team modernizing cloud infrastructure for enterprise-grade document solutions.
- Competitive compensation, professional development programs, and a collaborative culture.

#LI-Hybrid
Posted 2 days ago
Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
India's major tech hubs are known for their strong tech presence and have a high demand for Terraform professionals.
The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
Terraform interviews typically probe the tool's core workflow, in particular the plan and apply commands, which preview and then execute infrastructure changes.
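A minimal, self-contained illustration of that workflow, using the local provider so it runs without any cloud credentials (the file name and content are arbitrary):

```hcl
# Writes one file to the module directory; safe to run anywhere.
resource "local_file" "hello" {
  filename = "${path.module}/hello.txt"
  content  = "managed by terraform\n"
}

# Typical session:
#   terraform init    # install the provider, initialize the directory
#   terraform plan    # preview what would change, without changing anything
#   terraform apply   # make the previewed changes after confirmation
```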
As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!