7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description and Requirements

Position Summary: This position is responsible for the design and implementation of application platform solutions, with an initial focus on Customer Communication Management (CCM) platforms such as enterprise search and document generation/workflow products, including Quadient, xPression, Documaker, WebSphere Application Server (WAS), and technologies from OpenText. While gaining and providing expertise on these key business platforms, the Engineer will identify opportunities for automation and cloud-enablement across other technologies within the Platform Engineering portfolio and develop cross-functional expertise.

Job Responsibilities:
- Provide design and technical support to application developers and operations support staff when required, including promoting the use of best practices, ensuring standardization across applications, and troubleshooting.
- Design and implement complex integration solutions through collaboration with engineers and application teams across the global enterprise.
- Promote and utilize automation to design and support configuration management, orchestration, and maintenance of the integration platforms using tools such as Perl, Python, and Unix shell.
- Collaborate with senior engineers to understand emerging technologies and their effect on unit cost and service delivery as part of the evolution of the integration technology roadmap.
- Investigate, recommend, implement, and maintain CCM solutions across multiple technologies.
- Investigate released fix packs and provide well-documented instructions and script automation to operations for implementation, in collaboration with Senior Engineers, in support of platform currency.
- Perform capacity reviews of the current platform.
- Participate in cross-departmental efforts.
- Lead initiatives within the community of practice.
- Willingness to work in rotational shifts.
- Good communication skills, with the ability to communicate clearly and effectively.

Education: Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience: 7+ years of total experience in designing, developing, testing, and deploying n-tier applications built on Java, Python, WebSphere Application Server, Liberty, Apache Tomcat, etc. At least 4+ years of experience on Customer Communication Management (CCM) and document generation platforms such as Quadient, xPression, and Documaker.

Technical Skills: Linux/Windows OS; Apache/HIS; IBM WebSphere Application Server, Liberty; Quadient, xPression; Ansible; shell scripting (Linux, PowerShell); JSON/YAML; Ping, SiteMinder; monitoring and observability (Elastic, AppD, Kibana); troubleshooting; log and performance analysis; OpenShift.

About MetLife: Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
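To give a concrete flavor of the scripted platform automation this role describes, here is a minimal Python sketch that reads a YAML inventory of application-server health endpoints and reports their status. The inventory file name and URLs are hypothetical, and PyYAML is assumed to be installed; an actual implementation on these platforms would differ.

```python
# Illustrative platform health check: reads a YAML inventory of application
# server endpoints and reports which ones respond. Hostnames and file path
# are hypothetical; PyYAML (pip install pyyaml) is assumed available.
import urllib.request
import urllib.error

import yaml

INVENTORY = "was_endpoints.yaml"  # e.g. {"servers": ["http://app01:9080/health", ...]}

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

def main() -> None:
    with open(INVENTORY) as fh:
        servers = yaml.safe_load(fh)["servers"]
    for url in servers:
        print(f"{'UP' if check(url) else 'DOWN':4} {url}")

if __name__ == "__main__":
    main()
```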
Posted 2 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role

Grade Level (for internal use): 11

The Team: The Infrastructure team is a global team split across the US, Canada, and the UK. The team is responsible for building and maintaining platforms used by Index Management teams to calculate and rebalance our high-profile indices.

The Impact: You will be responsible for the development and expansion of platforms which calculate and rebalance indices for S&P Dow Jones Indices, ensuring that relevant teams have continuous access to up-to-date benchmarks and indices.

What’s In It For You: In this role, you will be a key player in the Infrastructure Engineering team, where you will manage the automation of systems administration in the AWS Cloud environment used for running index applications. You will build solutions to automate resource provisioning and administration of infrastructure in AWS Cloud for our index applications. There will also be a smaller element of L3 support for developers when they have more complex queries to address.

Responsibilities:
- Create DevOps pipelines to deliver Infrastructure as Code.
- Build workflows to create immutable infrastructure in AWS.
- Develop automation for provisioning compute instances and storage.
- Build AMI images using Packer.
- Develop Ansible playbooks and automate execution of routine Linux scripts.
- Provision resources in AWS using CloudFormation templates.
- Deploy immutable infrastructure in AWS using Terraform.
- Orchestrate container deployment.
- Configure Security Groups, Roles, and IAM Policies in AWS.
- Monitor infrastructure and develop utilization reports.
- Implement and maintain version control systems, configuration management tools, and other DevOps-related technologies.
- Design and implement automation tools and frameworks for continuous integration, delivery, and deployment.
- Develop and write scripts for pipeline automation using relevant scripting languages such as Groovy and YAML.
- Configure continuous delivery workflows for various environments, e.g., development, staging, and production.
- Use Jenkins to create pipelines: groups of events or jobs that are interlinked with one another in a sequence.
- Evaluate new AWS services and solutions.
- Integrate application build and deployment scripts with Jenkins.
- Troubleshoot production issues.
- Effectively interact with global customers, business users, and IT employees.

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or Engineering, an equivalent qualification, or relevant equivalent work experience.
- RedHat Linux and AWS certifications preferred.
- Strong experience in infrastructure engineering and automation.
- Very good experience in AWS Cloud systems administration.
- Experience in developing Ansible scripts and Jenkins integration.
- Expertise using DevOps tools (Jenkins, Terraform, Packer, Ansible, GitHub, Artifactory).
- Expertise in the different automation tools used to develop CI/CD pipelines.
- Proficiency in Jenkins and Groovy for creating dynamic and responsive CI/CD pipelines.
- Good experience in RedHat Linux scripting.
- First-class communication skills: written, verbal, and presenting.

Preferred Qualifications:
- 10+ years of industry experience in cloud and infrastructure.
- Administer RedHat Linux operating systems.
- Deploy OS patches and perform upgrades.
- Configure filesystems and allocate storage.
- Develop Unix scripts.
- Develop scripts for automation of infrastructure provisioning.
- Monitor infrastructure and develop utilization reports.
- Evaluate new AWS services and solutions.
- Experience working with customers to diagnose a problem and work toward resolution.
- Excellent verbal and written communication skills.
- Understanding of various load balancers in a large data center environment.

About S&P Dow Jones Indices: At S&P Dow Jones Indices, we provide iconic and innovative index solutions backed by unparalleled expertise across the asset-class spectrum. By bringing transparency to the global capital markets, we empower investors everywhere to make decisions with conviction. We’re the largest global resource for index-based concepts, data and research, and home to iconic financial market indicators, such as the S&P 500® and the Dow Jones Industrial Average®. More assets are invested in products based upon our indices than any other index provider in the world. With over USD 7.4 trillion in passively managed assets linked to our indices and over USD 11.3 trillion benchmarked to our indices, our solutions are widely considered indispensable in tracking market performance, evaluating portfolios and developing investment strategies. S&P Dow Jones Indices is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/spdji.

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Inclusive Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Job ID: 301292
Posted On: 2025-02-26
Location: Mumbai, Maharashtra, India
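As a hedged illustration of the provisioning automation in the responsibilities above, the following Python sketch launches a tagged EC2 instance with boto3. The AMI ID, instance type, and tag values are placeholders, and a role like this one would more likely drive provisioning through CloudFormation or Terraform.

```python
# Rough sketch of "develop automation for provisioning compute instances"
# using boto3. AMI ID, instance type, and tags are placeholders; credentials
# and region come from the standard AWS configuration chain.
import boto3

def launch_index_node(ami_id: str, instance_type: str = "t3.medium") -> str:
    """Launch a single tagged EC2 instance and return its instance ID."""
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "team", "Value": "index-infra"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print(launch_index_node("ami-0123456789abcdef0"))  # hypothetical AMI
```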
Posted 2 weeks ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
Experience:
- Good experience in API design, development, and implementation.
- 3 years of experience with cloud platform services (preferably GCP).
- Hands-on experience in designing, implementing, and maintaining APIs that meet the highest standards of performance, security, and scalability.
- Hands-on experience designing, developing, and implementing microservices architectures and solutions using industry best practices and design patterns.
- Hands-on experience with cloud computing and services.
- Proficiency in programming languages such as Java, Python, and JavaScript.
- Hands-on experience with API gateway and management tools such as Apigee, Kong, and API Gateway.
- Hands-on experience integrating APIs with a variety of systems, applications, microservices, and infrastructure.
- Deployment experience in a cloud environment (preferably GCP).
- Experience in TDD/DDD and unit testing.
- Hands-on CI/CD experience automating the build, test, and deployment processes to ensure rapid and reliable delivery of API updates.

Technical Skills:
- Programming and languages: Java, GraphQL, SQL; API gateway and management tools: Apigee, API Gateway.
- Database technologies: Oracle, Spanner, BigQuery, Cloud Storage.
- Operating systems: Linux.
- Expert in API design principles, specifications, and architectural styles such as REST, GraphQL, and gRPC; proficiency in API lifecycle management, advanced security measures, and performance optimization.
- Good knowledge of security best practices and compliance awareness.
- Good knowledge of messaging patterns and distributed systems.
- Well-versed in protocols and data formats.
- Strong development knowledge of microservice design, architectural patterns, frameworks, and libraries.
- Knowledge of SQL and NoSQL databases and how to interact with them through APIs.
- Good to have: knowledge of data modeling and database management, i.e., designing database schemas that efficiently store and retrieve data.
- Scripting and configuration (e.g., YAML) knowledge.
- Strong testing and debugging skills: writing unit tests and familiarity with the tools and techniques to fix issues.
- DevOps knowledge: CI/CD practices and tools.
- Familiarity with monitoring and observability platforms for real-time insights into application performance.
- Understanding of version control systems like Git.
- Familiarity with API documentation standards such as OpenAPI.
- Problem-solving skills and the ability to work independently in a fast-paced environment.
- Effective communication: negotiate and communicate effectively with stakeholders to ensure API solutions meet the needs of both technical and non-technical stakeholders.
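For illustration only, here is the smallest possible REST endpoint in Python/Flask, showing the shape of the API work the listing describes. The route, payload, and port are hypothetical, and the employer's actual stack (Apigee-fronted services on GCP, likely in Java) would look different.

```python
# Minimal sketch of a versioned REST endpoint. Flask is used for brevity;
# the route and response fields are invented for the example.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/api/v1/orders/<order_id>")
def get_order(order_id: str):
    # A real service would call a repository or downstream microservice here.
    return jsonify({"id": order_id, "status": "PLACED"})

if __name__ == "__main__":
    app.run(port=8080)
```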
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
- Strong DevOps knowledge, with hands-on experience in CI/CD (any CI/CD tools).
- Extensive experience in Linux system administration, networking, and troubleshooting.
- Experience in shell, YAML, JSON, and Groovy scripting.
- Strong experience in AWS (EC2, S3, VPC, RDS, IAM, Organizations, Identity Center, etc.).
- Ability to set up CI/CD pipelines using AWS services or other CI/CD tools.
- Experience in configuring and troubleshooting EKS and ECS and deploying applications on an EKS/ECS cluster.
- Strong hands-on knowledge of Terraform and/or AWS CFT; must be able to automate AWS infrastructure provisioning using Terraform/CFT in the most efficient way.
- Experience with CloudWatch, CloudTrail, and Prometheus-Grafana for infrastructure/application monitoring.
- Most importantly, must have great soft skills, critical and analytical thinking in a larger scope, and the ability to quickly and efficiently understand, identify, and solve problems.
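As a small, hedged example of the EKS troubleshooting noted above, this Python snippet queries a cluster's status and node groups with boto3. The cluster name is hypothetical, and credentials are assumed to come from the standard AWS configuration chain.

```python
# Quick programmatic EKS check: cluster status plus its node groups.
import boto3

def eks_summary(cluster: str) -> None:
    eks = boto3.client("eks")
    info = eks.describe_cluster(name=cluster)["cluster"]
    print(f"{cluster}: status={info['status']} version={info['version']}")
    for ng in eks.list_nodegroups(clusterName=cluster)["nodegroups"]:
        print("  nodegroup:", ng)

if __name__ == "__main__":
    eks_summary("prod-cluster")  # hypothetical cluster name
```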
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward and progress: to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Associate/Graduate Engineer
Location: Bangalore

Business & Team: Group Security Technology’s purpose is safeguarding a brighter future for all through innovative technology. We do this by designing, building, and running security products across cyber, identity, and fraud technology in order to fulfil our objectives:
- Improve velocity of security outcomes with great customer experiences.
- Innovate, digitise, and disrupt the way we deliver security.
- Lead the way in secure cloud and control adoption.
- Assure safe, sound, and secure technology.

Our Edge Security Technology squad builds, runs, and enhances the edge security products and services that protect the CBA Group's internet-facing websites, web applications, APIs, and perimeter infrastructure from distributed denial of service and web application and API cyberattacks.

Impact & Contribution: The Sr Platform Engineer will work with their chapter lead, squad lead, and squad members to drive product roadmap initiatives, support and accelerate business unit engagements, and contribute to operational enhancement and BAU activities.

Roles & Responsibilities:
- Write software and tooling that automates the operations of our platforms, infrastructure, environments, and tooling.
- Create a standardised set of tooling for deploying and running applications and setting them up with best practices.
- Maintain the underlying infrastructure to ensure that it is reliable, secure, and scalable.
- Make all platforms entirely self-service, secure, and available within minutes without human approval.
- Ensure that our platforms are loved by our software engineers, and continually evolve our platforms to embrace new technology and improve the happiness and efficiency of our software engineers.
- Write software to ensure that all deployment and operations are as automated as possible, using languages such as Python and Go, and automate the implementation of controls into our platforms.
- Participate in cross-group activities to build a culture of one team, bar-raising both our engineering capability and our technology solutions to drive our strategy.

Essential Skills:
- Experience: 3 to 5 years.
- Experience with a Data Security Posture Management tool, e.g., BigID, Cyera, Varonis.
- Experience as a Software Engineer/DevSecOps/DevOps/System Engineer in a cloud environment.
- Expertise working with AWS infrastructure is essential.
- Experience with container technology: Docker and Kubernetes/EKS.
- Strong experience with scripting/programming in languages such as PowerShell, Java, Bash, Python, and YAML.
- Experience working with Infrastructure as Code (CloudFormation, Terraform, CDK, AWS CodeBuild/CodePipeline) highly regarded.
- Experience working with CI/CD and automation tools such as GitHub, GitHub Actions, TeamCity, and Jenkins.
- Experience with logging and monitoring tools such as Observe/Splunk, Prometheus, Grafana, and PagerDuty.
- A systematic problem-solving approach, proactively continuing to improve current processes and tools.
- Ability to communicate ideas clearly and effectively.

Education Qualification: Bachelor’s or Master’s degree in Engineering in Computer Science/Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career.

We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 30/05/2025
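To illustrate the monitoring side of this role (Prometheus/Grafana), here is a short, hedged Python sketch that exposes a custom gauge for Prometheus to scrape. The metric name, port, and simulated reading are all invented for the example, and the prometheus_client package is assumed to be installed.

```python
# Expose a custom application metric at :9100/metrics for Prometheus.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("edge_waf_queue_depth",
                    "Pending WAF events awaiting processing")  # invented metric

if __name__ == "__main__":
    start_http_server(9100)
    while True:
        queue_depth.set(random.randint(0, 50))  # stand-in for a real reading
        time.sleep(15)
```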
Posted 2 weeks ago
4.0 - 6.0 years
3 - 6 Lacs
Chennai
Work from Office
- 4+ years of experience in the development of automation test cases, preferably in a SaaS environment.
- Strong hands-on experience with Cypress, Selenium, or RestAssured for automation frameworks.
- Superior JavaScript skills required.
- Experience working directly with JMeter, LoadRunner, Karate API, YAML, and Postman.
- Superior knowledge of agile best practices and continuous testing in a CI/CD environment such as GCP DevOps.
- Experience with SCM tools like GitHub and JIRA; familiarity with SQL.
- P&C Insurance industry experience required.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
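This posting's automation stack is JavaScript-centric (Cypress/Selenium); purely to keep the examples in this document in one language, the continuous-testing idea is sketched below as a Python API smoke test using pytest and requests, against a hypothetical staging endpoint with invented fields.

```python
# API smoke test suitable for a CI/CD pipeline (run with: pytest).
# URL and response fields are hypothetical.
import requests

BASE = "https://staging.example.com/api"

def test_policy_endpoint_returns_ok():
    resp = requests.get(f"{BASE}/v1/policies/123", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert body["policyId"] == "123"
    assert "premium" in body
```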
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of Responsibility

Experience: 3 to 5 years.

Tools & Technologies: Site Reliability Engineering, AWS, PCF, DevOps, Tomcat 9, Nginx web server, Unix OS (Linux), Splunk, New Relic, load balancers, PuTTY, WinSCP, shell scripting, Stonebranch.

Job Responsibilities:
- Diagnose and resolve technical issues with AWS services, applications, and infrastructure, often involving deep dives into logs and system analysis.
- Resolve technical issues reported by users or detected through monitoring tools.
- Assist with the deployment, configuration, and setup of cloud resources (e.g., EC2, S3, VPC), including virtual machines, storage, and networking, ensuring proper configuration and integration.
- Monitor cloud infrastructure performance metrics, identify bottlenecks, and recommend solutions for improved performance and cost-efficiency.
- Develop and implement automation scripts and tools to streamline support processes and automate infrastructure provisioning.
- Experience in CI/CD pipelines for code deployment using CloudFormation, Ansible, Git, Maven, SonarQube, Gradle, Nexus, Bitbucket, UDeployer, Fortify, and other DevOps components, with YAML and the Classic Editor.
- Pipeline scripting knowledge, e.g., Groovy, shell scripts, Python.
- Write shell scripts and add them to cron jobs to automate daily maintenance tasks such as log rotation, report transfer, server rolling bounces, file-system cleanups, and alerts (a sketch follows this listing).

Primary Skills (must-have for the candidate):
- Good knowledge of migration tasks, including planning, execution, and documentation.
- Must have experience with cloud technologies such as PCF/AWS.
- Managing and resolving incidents, including escalating issues as needed, and analysing system performance data and logs to identify trends and forecast needs.
- Setting up high-availability environments using external hardware F5 load balancers, Nginx web servers, HTTP session replication, and clustering of WebLogic Server and services such as JDBC and JMS.

Skills with M/O flag are part of Specialization:
- Capacity Management - PL2 (Functional)
- Win the Customer - PL2 (Behavioural)
- One Birlasoft - PL2 (Behavioural)
- Results Matter - PL2 (Behavioural)
- Get Future Ready - PL2 (Behavioural)
- Availability Management - PL3 (Functional)
- Service Level Management - PL2 (Functional)
- Incident Management - PL3 (Functional)
- IT Infrastructure - PL3 (Functional)
- Help the tribe - PL2 (Behavioural)
- Think Holistically - PL2 (Behavioural)
- GCP-Administration - PL3 (Mandatory)
- GCP-DevOps - PL2 (Optional)
- GCP-IaC - PL3 (Mandatory)
- Linux administration - PL2 (Optional)
- Wintel Administration - PL2 (Optional)
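As a hedged sketch of the cron-driven maintenance automation described above, the Python script below deletes application log files older than a retention window. The log directory and 14-day window are placeholders; only the standard library is used.

```python
# Daily maintenance task: purge stale log files (file-system cleanup).
# LOG_DIR and MAX_AGE_DAYS are hypothetical values.
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")
MAX_AGE_DAYS = 14

def purge_old_logs() -> int:
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    removed = 0
    for f in LOG_DIR.glob("*.log*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"removed {purge_old_logs()} stale log files")
```

A crontab entry such as `0 2 * * * /usr/bin/python3 /opt/scripts/purge_logs.py` (path hypothetical) would run it nightly.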
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Kochi, Kerala
On-site
SOC ENGINEER (ENGINEER R&D / DEV)

We are looking for a candidate with experience as a DevOps engineer in creating systems software and analyzing data to improve existing systems or drive new innovation, along with developing and maintaining scalable applications and monitoring, troubleshooting, and resolving issues, including deployments in multiple environments. The candidate must be well-versed in computer systems and network functions. They should be able to work diligently and accurately and should have great problem-solving ability in order to fix issues and ensure the client’s business functionality.

REQUIREMENTS:
- ELK development experience.
- Dev or DevOps experience on AWS cloud, containers, and serverless code.
- Development stack of Wazuh and ELK.
- Implement best DevOps practices.
- Toolset knowledge required for parser/use-case development and plugin customisation: regex, Python, YAML, XML (a toy parser example follows this listing).
- Hands-on experience in DevOps.
- Experience with Linux and monitoring/logging tools such as Splunk; strong scripting skills.
- Researching and designing new software systems, websites, programs, and applications.
- Writing and implementing clean, scalable code.
- Troubleshooting and debugging code.
- Verifying and deploying software systems.
- Evaluating user feedback.
- Recommending and executing program improvements.
- Maintaining software code and security systems.
- Knowledge of cloud systems (AWS, Azure).
- Excellent communication skills.

GOOD TO HAVE:
- SOC/security domain experience is desirable.
- Knowledge of Docker, machine learning, big data, data analysis, and web scraping.
- Resourcefulness and problem-solving aptitude.
- Good understanding of SIEM solutions like ELK, Splunk, and ArcSight.
- Understanding of cloud platforms like Amazon AWS, Microsoft Azure, and Google Cloud.
- Experience in managing firewall/UTM solutions from Sophos, Fortigate, Palo Alto, and Cisco Firepower.
- Professional certification (e.g., Linux Foundation Certified System Administrator, CompTIA Linux+, RHCSA - Red Hat Certified System Administrator).

QUALIFICATION: 2-3 years of experience in product/DevOps/SecOps/development.

SKILLS:
- Experience in software design and development using API infrastructure.
- Profound knowledge of various scripting languages and of system and server administration.
- Exceptional organizing and time-management skills.
- Very good communication abilities.
- ELK, Wazuh, Splunk, ArcSight SIEM management skills.
- Reporting.

Job Types: Full-time, Permanent
Pay: ₹25,000.00 - ₹66,000.00 per month
Benefits: Internet reimbursement
Schedule: Day shift
Supplemental Pay: Performance bonus
Application Question(s): Do you have experience in SIEM tools, scripting, backend, or frontend development?
Experience: minimum 1 year (Required)
Language: English (Required)
Location: Kochi, Kerala (Required)
Work Location: In person
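The parser/use-case development this posting mentions often starts with a regular expression over raw log lines. The toy Python sketch below extracts failed-SSH-login events from a common sshd log pattern; the format and sample line are illustrative, not Wazuh- or ELK-specific.

```python
# Toy SIEM parser: pull the user and source IP out of a failed-login line.
import re

PATTERN = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) "
    r"from (?P<ip>\d{1,3}(?:\.\d{1,3}){3})"
)

def parse_failed_login(line: str):
    """Return {'user': ..., 'ip': ...} for a failed-login line, else None."""
    m = PATTERN.search(line)
    return m.groupdict() if m else None

sample = ("May 12 10:15:01 host sshd[123]: Failed password for invalid user "
          "admin from 203.0.113.7 port 4242 ssh2")
print(parse_failed_login(sample))  # {'user': 'admin', 'ip': '203.0.113.7'}
```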
Posted 2 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Design, develop, and maintain backend services and APIs using Java and Python.
- Write clean, efficient, and scalable code following best practices.
- Build microservices architecture and integrate services effectively.
- Create unit tests and debug issues to ensure application reliability.
- Deploy and manage containerized applications using Kubernetes.
- Create and maintain Kubernetes manifests (YAML files) for pods, services, and ingresses.
- Implement auto-scaling, load balancing, and fault-tolerant systems in Kubernetes clusters.
- Monitor and optimize Kubernetes clusters using tools like Prometheus and Grafana.
- Build and maintain CI/CD pipelines using GitHub Actions for automated testing, building, and deployment.
- Write custom workflows in YAML to streamline development and deployment tasks.
- Secure pipelines using GitHub Secrets and troubleshoot workflow issues.
- Collaborate with cross-functional teams, including QA, DevOps, and product managers, to deliver high-quality software.
- Deploy applications to cloud platforms (AWS, GCP, Azure) and manage Kubernetes integrations.
- Mentor junior engineers and contribute to improving team processes and technical standards.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Graduate degree or equivalent experience.
- 1+ years of experience.
- Skills: Java, Python, API development, REST services, Kubernetes, Docker, Jenkins, GitHub, SQL, MongoDB.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #Gen
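One of the responsibilities above is authoring Kubernetes manifests as YAML. Here is a hedged Python sketch of that task: build a Deployment spec as a plain dict and emit YAML with PyYAML. The name, image, port, and replica count are placeholders.

```python
# Generate a Kubernetes Deployment manifest as YAML. PyYAML assumed
# installed; resource names and image are invented for the example.
import yaml

def deployment(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image,
                                         "ports": [{"containerPort": 8080}]}]},
            },
        },
    }

if __name__ == "__main__":
    print(yaml.safe_dump(deployment("orders-api", "example/orders:1.0")))
```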
Posted 2 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings from LTIMindtree! We're looking for a DevOps Engineer!

Experience: 3 to 5 years
Location: Pune/Mumbai/Chennai/Hyderabad/Bangalore/Kolkata

Overall 3-5 years of IT experience, with a minimum of 5+ years of experience as an Azure DevOps Engineer.

JD: DevOps (CI/CD), Kubernetes, Terraform, YAML or Helm charts, and any scripting language (PowerShell, Python, shell, or Groovy).

Interested candidates, share your updated profile with madhuvanthi.s@ltimindtree.com.
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Java Backend Developer
Location: Hyderabad (On-site)
Experience: Minimum 5 years
Notice Period: Immediate joiners only
Primary Skills: Core Java, Spring Boot, Microservices, REST APIs, Kubernetes
Secondary Skills: Docker, OpenShift, YAML
Media or Telecom domain experience is mandatory.

Key Responsibilities:
- Design, develop, and maintain scalable backend applications using Core Java and Spring Boot.
- Build and manage RESTful APIs and a microservices architecture.
- Deploy and manage services using Kubernetes, with exposure to OpenShift and Docker.
- Write efficient, reusable, and testable code in a fast-paced development environment.
- Collaborate closely with cross-functional teams, including DevOps, Product Owners, and QA.
- Ensure performance, scalability, and security in all backend implementations.
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Overview

As an LLM (Large Language Model) Engineer, you will be responsible for designing, optimizing, and standardizing the architecture, codebase, and deployment pipelines of LLM-based systems. Your primary mission will focus on modernizing legacy machine learning codebases (including 40+ models) for a major retail client, enabling consistency, modularity, observability, and readiness for GenAI-driven innovation. You'll work at the intersection of ML, software engineering, and MLOps to enable seamless experimentation, robust infrastructure, and production-grade performance for language-driven systems. This role requires deep expertise in NLP, transformer-based models, and the evolving ecosystem of LLM operations (LLMOps), along with a hands-on approach to debugging, refactoring, and building unified frameworks for scalable GenAI.

Responsibilities:
- Lead the standardization and modernization of legacy ML codebases by aligning to current LLM architecture best practices.
- Re-architect code for 40+ legacy ML models, ensuring modularity, documentation, and consistent design patterns.
- Design and maintain pipelines for fine-tuning, evaluation, and inference of LLMs using Hugging Face, OpenAI, or open-source stacks (e.g., LLaMA, Mistral, Falcon).
- Build frameworks to operationalize prompt engineering, retrieval-augmented generation (RAG), and few-shot/in-context learning methods.
- Collaborate with Data Scientists, MLOps Engineers, and Platform teams to implement scalable CI/CD pipelines, feature stores, model registries, and unified experiment tracking.
- Benchmark model performance, latency, and cost across multiple deployment environments (on-premise, GCP, Azure).
- Develop governance, access control, and audit logging mechanisms for LLM outputs to ensure data safety and compliance.
- Mentor engineering teams in code best practices, versioning, and the LLM lifecycle.

Skills:
- Deep understanding of transformer architectures, tokenization, attention mechanisms, and training/inference optimization.
- Proven track record in standardizing ML systems using OOP design, reusable components, and scalable service APIs.
- Hands-on experience with MLflow, LangChain, Ray, Prefect/Airflow, Docker, Kubernetes, Weights & Biases, and model-serving platforms.
- Strong grasp of prompt tuning, evaluation metrics, context window management, and hybrid search strategies using vector databases like FAISS, pgvector, or Milvus.
- Proficient in Python (must), with working knowledge of shell scripting, YAML, and JSON schema standardization.
- Experience managing the compute, memory, and storage requirements of LLMs across environments.

Qualifications & Experience:
- 5+ years in ML/AI engineering, with at least 2 years working on LLMs or NLP-heavy systems.
- Able to reverse-engineer undocumented code and reimagine it with strong documentation and testing in mind.
- Clear communicator who collaborates well with business, data science, and DevOps teams.
- Familiar with agile processes, JIRA, GitOps, and Confluence-based knowledge sharing.
- Curious and future-facing, always exploring new techniques and pushing the envelope on GenAI innovation.
- Passionate about data ethics, responsible AI, and building inclusive systems that scale.
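Since the posting centres on RAG, here is a deliberately minimal Python illustration of the retrieval step: cosine similarity over stand-in embedding vectors. A production system of the kind described would use learned embeddings and a vector store such as FAISS, pgvector, or Milvus; everything here (documents, vectors, dimensions) is invented so the sketch stays self-contained.

```python
# Toy retrieval step for RAG: rank documents by cosine similarity of their
# embeddings to a query embedding. Random vectors stand in for a real model.
import numpy as np

rng = np.random.default_rng(0)
docs = ["return policy", "store hours", "loyalty points"]
doc_vecs = rng.normal(size=(len(docs), 8))  # stand-in embeddings

def top_k(query_vec: np.ndarray, k: int = 2) -> list[str]:
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [docs[i] for i in np.argsort(-sims)[:k]]

print(top_k(rng.normal(size=8)))  # two most similar document titles
```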
Posted 2 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description and Requirements

Position Summary: This position is responsible for the design and implementation of application platform solutions, with an initial focus on Enterprise Content Management (ECM) platforms such as enterprise search and document generation/workflow products, including IBM FileNet/BAW, WebSphere Application Server (WAS), and technologies from OpenText. While gaining and providing expertise on these key business platforms, the Engineer will identify opportunities for automation and cloud-enablement across other technologies within the Platform Engineering portfolio and develop cross-functional expertise.

Job Responsibilities:
- Provide design and technical support to application developers and operations support staff when required, including promoting the use of best practices, ensuring standardization across applications, and troubleshooting.
- Design and implement complex integration solutions through collaboration with engineers and application teams across the global enterprise.
- Promote and utilize automation to design and support configuration management, orchestration, and maintenance of the integration platforms using tools such as Perl, Python, and Unix shell.
- Collaborate with senior engineers to understand emerging technologies and their effect on unit cost and service delivery as part of the evolution of the integration technology roadmap.
- Investigate, recommend, implement, and maintain ECM solutions across multiple technologies.
- Investigate released fix packs and provide well-documented instructions and script automation to operations for implementation, in collaboration with Senior Engineers, in support of platform currency.
- Perform capacity reviews of the current platform.
- Participate in cross-departmental efforts.
- Lead initiatives within the community of practice.
- Willingness to work in rotational shifts.
- Good communication skills, with the ability to communicate clearly and effectively.

Knowledge, Skills and Abilities

Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.

Experience: 7+ years of total experience, with at least 4+ years of experience in the design and implementation of application platform solutions on Enterprise Content Management (ECM) platforms such as enterprise search and document generation/workflow products, including IBM FileNet/BAW and WebSphere Application Server (WAS). Experience promoting and utilizing automation to design and support configuration management, orchestration, and maintenance of the integration platforms using tools such as Perl, Python, and Unix shell.

Technical skills:
- Apache / HIS
- Linux/Windows OS
- Communication
- JSON/YAML
- Shell scripting
- Integration of authentication and authorization methods
- Web-to-JVM communications
- SSL/TLS protocols, cipher suites, and certificates/keystores
- FileNet/BAW installation, configuration, and administration
- Liberty administration
- Troubleshooting
- Integration with database technologies
- Integration with middleware technologies

Good to Have: Ansible, Python, OpenShift, AZDO Pipelines.

Other Requirements (licenses, certifications, specialized training, if required)

Working Relationships: Internal contacts (and purpose of relationship): MetLife internal partners. External contacts (and purpose of relationship), if applicable: MetLife external partners.

About MetLife: Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!
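Among the skills listed is working with SSL/TLS and certificates/keystores. As a hedged, standard-library-only illustration, the Python snippet below connects to a host and prints its certificate's expiry date; the hostname is a placeholder.

```python
# Report a server certificate's notAfter (expiry) field over TLS.
import socket
import ssl

def cert_not_after(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

if __name__ == "__main__":
    print(cert_not_after("example.com"))  # e.g. 'Jan 14 23:59:59 2026 GMT'
```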
Posted 2 weeks ago
3.0 years
50 - 60 Lacs
Gurugram, Haryana, India
Remote
Experience: 3.00+ years
Salary: INR 5000000-6000000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Rill Data)
(Note: This is a requirement for one of Uplers' clients - Rill Data)

What do you need for this opportunity?
Must-have skills: DBT, Iceberg, Kestra, Parquet, SQLGlot, ClickHouse, DuckDB, AWS, Python, SQL

Rill Data is looking for: Rill is the world’s fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that’s easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting.

Job Summary Overview: Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale.

Key Responsibilities:
- Collaborate with enterprise customers to optimize data models for performance and cost efficiency.
- Work with the platform engineering team to enhance and refine our incremental ingestion architectures.
- Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements.

Qualifications (required):
- Fluency in SQL and competency in Python.
- Bachelor’s degree in a STEM discipline or equivalent industry experience.
- 3+ years of experience in a data engineering or related role.
- Familiarity with major cloud environments (AWS, Google Cloud, Azure).

Benefits: Competitive salary, health insurance, flexible vacation policy.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
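To show the flavor of the stack this role works in, here is a minimal, hedged example of querying a Parquet file directly with DuckDB's Python API. The file name and column names are invented.

```python
# Query a Parquet file in place with DuckDB; no ETL step required.
# 'events.parquet' and its columns are hypothetical.
import duckdb

result = duckdb.sql("""
    SELECT advertiser, count(*) AS impressions
    FROM 'events.parquet'
    GROUP BY advertiser
    ORDER BY impressions DESC
    LIMIT 5
""")
print(result)  # prints the result table
```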
Posted 2 weeks ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Onboard clients via the components of our data engineering pipeline, which consists of UIs, Azure Databricks, Azure ServiceBus, Apache Airflow, and various container-based services configured through UIs, SQL, PL/SQL, Python, YAML, Node, and shell, with code managed in GitHub, deployed through Jenkins, and monitored through Prometheus and Grafana.
- Work as part of our client implementation team to ensure the highest standards of product configuration that meet client requirements.
- Test and troubleshoot the data pipeline using sample and live client data; utilize Jenkins, Python, Groovy scripts, and Java to automate these tests; must be able to parse logs to determine next actions.
- Work with product teams to ensure the product is configured appropriately.
- Utilize dashboards for Kubernetes/OpenShift to diagnose high-level issues and ensure services are healthy.
- Support the implementation immediately after go-live and work with the O&M team to transition support to that team.
- Participate in daily agile meetings.
- Estimate project deliverables.
- Configure and test REST APIs and utilize manual tools to interact with APIs.
- Work with data providers to clarify requirements and remove roadblocks.
- Drive automation into everyday activities.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- 3+ years of experience working with SQL (preferably Oracle PL/SQL and SparkSQL) and data at scale.
- 3+ years of ETL experience ensuring source-to-target data integrity; familiar with various file types (delimited text, fixed width, XML, JSON, Parquet).
- 1+ years of coding experience with one or more of the following languages: Java, C#, Python, NodeJS, using Git, with practical experience working collaboratively through Git branching strategies.
- 1+ years of experience with Microsoft Azure cloud infrastructure, Databricks, Data Factory, Data Lake, Airflow, and Cosmos DB.
- 1+ years of experience in reading and configuring YAML.
- 1+ years of experience with ServiceBus, setting up ingress and egress within a subscription, or relevant Azure cloud services administrative experience.
- 1+ years of experience with unit testing, code quality tools, CI/CD technologies, security, and container technologies.
- 1+ years of agile development experience and knowledge of agile ceremonies and practices.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
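A toy, hedged illustration of the source-to-target data integrity checks this role requires: compare row counts and key coverage between two extracts with pandas. The file names and key column are placeholders.

```python
# ETL integrity spot-check: row counts and key coverage, source vs. target.
import pandas as pd

def integrity_report(source_csv: str, target_csv: str, key: str) -> None:
    src = pd.read_csv(source_csv)
    tgt = pd.read_csv(target_csv)
    print(f"rows: source={len(src)} target={len(tgt)}")
    missing = set(src[key]) - set(tgt[key])
    print(f"keys present in source but missing in target: {len(missing)}")

if __name__ == "__main__":
    # All three arguments are hypothetical examples.
    integrity_report("members_source.csv", "members_target.csv", "member_id")
```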
Posted 2 weeks ago
3.0 years
50 - 60 Lacs
Raipur, Chhattisgarh, India
Remote
Experience : 3.00 + years Salary : INR 5000000-6000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Rill Data) (*Note: This is a requirement for one of Uplers' client - Rill Data) What do you need for this opportunity? Must have skills required: DBT, Iceberg, Kestra, Parquet, SQLGlot, ClickHouse, DuckDB, AWS, Python, SQL Rill Data is Looking for: Rill is the world’s fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that’s easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting. Job Summary Overview Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale. Key Responsibilities Collaborate with enterprise customers to optimize data models for performance and cost efficiency. Work with the platform engineering team to enhance and refine our incremental ingestion architectures. Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements. Qualifications (required) Fluency in SQL and competency in Python. Bachelor’s degree in a STEM discipline or equivalent industry experience. 3+ years of experience in a data engineering or related role. Familiarity with major cloud environments (AWS, Google Cloud, Azure) Benefits Competitive salary Health insurance Flexible vacation policy How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 2 weeks ago
3.0 years
50 - 60 Lacs
Jamshedpur, Jharkhand, India
Remote
Experience : 3.00 + years Salary : INR 5000000-6000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Rill Data) (*Note: This is a requirement for one of Uplers' client - Rill Data) What do you need for this opportunity? Must have skills required: DBT, Iceberg, Kestra, Parquet, SQLGlot, ClickHouse, DuckDB, AWS, Python, SQL Rill Data is Looking for: Rill is the world’s fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that’s easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting. Job Summary Overview Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale. Key Responsibilities Collaborate with enterprise customers to optimize data models for performance and cost efficiency. Work with the platform engineering team to enhance and refine our incremental ingestion architectures. Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements. Qualifications (required) Fluency in SQL and competency in Python. Bachelor’s degree in a STEM discipline or equivalent industry experience. 3+ years of experience in a data engineering or related role. Familiarity with major cloud environments (AWS, Google Cloud, Azure) Benefits Competitive salary Health insurance Flexible vacation policy How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 2 weeks ago
3.0 years
50 - 60 Lacs
Ranchi, Jharkhand, India
Remote
Experience : 3.00 + years Salary : INR 5000000-6000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Rill Data) (*Note: This is a requirement for one of Uplers' client - Rill Data) What do you need for this opportunity? Must have skills required: DBT, Iceberg, Kestra, Parquet, SQLGlot, ClickHouse, DuckDB, AWS, Python, SQL Rill Data is Looking for: Rill is the world’s fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that’s easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting. Job Summary Overview Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale. Key Responsibilities Collaborate with enterprise customers to optimize data models for performance and cost efficiency. Work with the platform engineering team to enhance and refine our incremental ingestion architectures. Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements. Qualifications (required) Fluency in SQL and competency in Python. Bachelor’s degree in a STEM discipline or equivalent industry experience. 3+ years of experience in a data engineering or related role. Familiarity with major cloud environments (AWS, Google Cloud, Azure) Benefits Competitive salary Health insurance Flexible vacation policy How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 2 weeks ago
3.0 years
50 - 60 Lacs
Amritsar, Punjab, India
Remote
Experience : 3.00 + years Salary : INR 5000000-6000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Rill Data) (*Note: This is a requirement for one of Uplers' client - Rill Data) What do you need for this opportunity? Must have skills required: DBT, Iceberg, Kestra, Parquet, SQLGlot, ClickHouse, DuckDB, AWS, Python, SQL Rill Data is Looking for: Rill is the world’s fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that’s easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting. Job Summary Overview Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale. Key Responsibilities Collaborate with enterprise customers to optimize data models for performance and cost efficiency. Work with the platform engineering team to enhance and refine our incremental ingestion architectures. Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements. Qualifications (required) Fluency in SQL and competency in Python. Bachelor’s degree in a STEM discipline or equivalent industry experience. 3+ years of experience in a data engineering or related role. Familiarity with major cloud environments (AWS, Google Cloud, Azure) Benefits Competitive salary Health insurance Flexible vacation policy How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 2 weeks ago
3.0 years
50 - 60 Lacs
Surat, Gujarat, India
Remote
Experience : 3.00 + years Salary : INR 5000000-6000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Rill Data) (*Note: This is a requirement for one of Uplers' client - Rill Data) What do you need for this opportunity? Must have skills required: DBT, Iceberg, Kestra, Parquet, SQLGlot, ClickHouse, DuckDB, AWS, Python, SQL Rill Data is Looking for: Rill is the world’s fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that’s easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting. Job Summary Overview Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale. Key Responsibilities Collaborate with enterprise customers to optimize data models for performance and cost efficiency. Work with the platform engineering team to enhance and refine our incremental ingestion architectures. Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements. Qualifications (required) Fluency in SQL and competency in Python. Bachelor’s degree in a STEM discipline or equivalent industry experience. 3+ years of experience in a data engineering or related role. Familiarity with major cloud environments (AWS, Google Cloud, Azure) Benefits Competitive salary Health insurance Flexible vacation policy How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 2 weeks ago
YAML (YAML Ain't Markup Language) has seen a surge in demand in the Indian job market. Organizations increasingly look for professionals who can use YAML to manage configuration files, define CI/CD pipelines, and describe structured data. If you are a job seeker interested in YAML roles in India, this article provides practical insights to help you navigate the market effectively.
India's major tech hubs are known for their vibrant tech scenes and have a high demand for YAML professionals.
The average salary range for YAML professionals in India varies by experience. Entry-level professionals can expect around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 10 lakhs per annum.
In the YAML skill area, a typical career path starts as a Junior Developer, progresses to Senior Developer, and eventually leads to a Tech Lead role. Continuous learning and hands-on experience with YAML are crucial for advancement.
Apart from YAML proficiency, other skills that are often expected or helpful alongside YAML include:
- Proficiency in scripting languages like Python or Ruby
- Experience with version control systems like Git
- Knowledge of containerization technologies like Docker
- Understanding of CI/CD pipelines
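As an illustration of how these adjacent skills meet YAML in practice, here is a minimal sketch of a CI workflow written in GitHub Actions syntax; the workflow name, image tag, and build step are invented for illustration:

```yaml
# .github/workflows/ci.yml (hypothetical path and contents)
name: ci
on: [push]              # trigger the workflow on every push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # check out the repository
      - name: Build Docker image
        run: docker build -t demo-app .  # containerization skill in practice
```

A single file like this exercises YAML mappings, sequences, and nesting while also touching Git and Docker, which is why interviewers often probe all of these together.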
Here are 25 interview questions for YAML roles; a short annotated sketch illustrating several of the concepts they probe follows the list:
- What is YAML and what are its advantages? (basic)
- Explain the difference between YAML and JSON. (basic)
- How can you include one YAML file in another? (medium)
- What is a YAML anchor? (medium)
- How can you create a multi-line string in YAML? (basic)
- Explain the difference between a sequence and a mapping in YAML. (medium)
- What is the difference between the ! and !! tag indicators in YAML? (advanced)
- Provide an example of using YAML in a Kubernetes manifest file. (medium)
- How can you comment in YAML? (basic)
- What is a YAML alias and how is it used? (medium)
- Explain how to define a list in YAML. (basic)
- What is a YAML tag? (medium)
- How can you handle sensitive data in a YAML file? (medium)
- Explain the concept of anchors and references in YAML. (medium)
- How can you represent a null value in YAML? (basic)
- What is the significance of the --- marker at the beginning of a YAML file? (basic)
- How can you represent a boolean value in YAML? (basic)
- Explain the concept of scalars, sequences, and mappings in YAML. (medium)
- How can you create a complex data structure in YAML? (medium)
- What is the difference between << and & in YAML? (advanced)
- Provide an example of using YAML in an Ansible playbook. (medium)
- Explain what YAML anchors and aliases are used for. (medium)
- How can you control the indentation in a YAML file? (basic)
- What is a YAML directive? (advanced)
- How can you represent special characters in a YAML file? (medium)
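To ground several of the questions above (comments, anchors and aliases, merge keys, multi-line strings, nulls and booleans, and the --- document marker), here is a small hand-written sketch; every key and value in it is invented purely for illustration, and the second document is a minimal Kubernetes-style manifest of the kind one question asks for:

```yaml
# Comments in YAML start with '#'.
---                     # '---' marks the start of a document; one file can hold several
defaults: &base         # '&base' defines an anchor on this mapping
  retries: 3
  enabled: true         # a boolean scalar
  token: null           # null can also be written as '~' or left empty
production:
  <<: *base             # '<<' merges the mapping referenced by the alias '*base'
  retries: 5            # local keys override merged ones
regions:                # a sequence (list) of scalars
  - us-east-1
  - eu-west-1
motd: |                 # '|' literal block scalar: newlines are preserved
  Line one.
  Line two.
summary: >              # '>' folded block scalar: lines are joined with spaces
  This text is folded
  into one line.
---                     # a second document: a minimal Kubernetes-style manifest
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  LOG_LEVEL: info
```

Note that merge keys (<<) come from YAML 1.1 and are supported by most, but not all, parsers; it is worth checking your toolchain before relying on them in an interview answer or in production.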
As you prepare for YAML job roles in India, remember to showcase your proficiency in YAML and related skills during interviews. Stay updated with the latest industry trends and continue to enhance your YAML expertise. With the right preparation and confidence, you can excel in the competitive job market for YAML professionals in India. Good luck!