8.0 - 13.0 years
15 - 25 Lacs
Gurugram
Remote
- Minimum 6 years of hands-on experience deploying, enhancing, and troubleshooting foundational AWS services (EC2, S3, RDS, VPC, CloudTrail, CloudFront, Lambda, EKS, ECS, etc.)
- 3+ years of experience with serverless and container technologies (Docker, Kubernetes, etc.):
  - Manage Kubernetes charts using Helm.
  - Manage production application deployments in Kubernetes clusters using kubectl.
  - Expertise in deploying distributed apps with containers (Docker) and orchestration (Kubernetes/EKS).
  - Experience with infrastructure-as-code tools for provisioning and managing Kubernetes infrastructure.
  - (Preferred) Certification in container orchestration systems and/or Certified Kubernetes Administrator.
  - Experience with log management and analytics tools such as Splunk or ELK.
- 3+ years of experience writing, debugging, and enhancing Terraform infrastructure-as-code scripts for EKS, EC2, S3, and other AWS services:
  - Expertise with key Terraform features: infrastructure as code, execution plans, resource graphs, and change automation.
  - Implemented cluster services using Kubernetes and Docker, including building self-hosted Kubernetes clusters with Terraform.
  - Managed provisioning of AWS infrastructure using Terraform.
  - Develop and maintain infrastructure-as-code solutions using Terraform.
- Ability to write scripts in JavaScript, Bash, Python, TypeScript, or similar languages (an illustrative scripting sketch follows below).
- Able to work independently and in a team to architect and implement new solutions and technologies.
- Very strong written and verbal communication skills: able to communicate verbally and in writing with all levels of employees and management, handle formal and informal communication, and speak and write clearly at the right level.
- Ability to identify, evaluate, learn, and build proofs of concept (POCs) with new technologies.
- Experience designing and implementing highly resilient AWS solutions.
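The kind of operational scripting mentioned above could look like the following minimal sketch, assuming boto3 with AWS credentials already configured; the region name is a placeholder, not part of the posting.

```python
# Illustrative only: list EKS clusters and summarize EC2 instance states in one
# region, the sort of troubleshooting script this role describes.
import boto3

REGION = "ap-south-1"  # placeholder region


def list_eks_clusters(region: str) -> list[str]:
    """Return the names of all EKS clusters in the region."""
    eks = boto3.client("eks", region_name=region)
    return eks.list_clusters()["clusters"]


def summarize_ec2_states(region: str) -> dict[str, int]:
    """Count EC2 instances per state (running, stopped, ...)."""
    ec2 = boto3.client("ec2", region_name=region)
    counts: dict[str, int] = {}
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                state = instance["State"]["Name"]
                counts[state] = counts.get(state, 0) + 1
    return counts


if __name__ == "__main__":
    print("EKS clusters:", list_eks_clusters(REGION))
    print("EC2 instance states:", summarize_ec2_states(REGION))
```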
Posted 1 month ago
3.0 - 5.0 years
1 - 3 Lacs
Chennai
Work from Office
**AWS Infrastructure Management:**
- Design, implement, and maintain scalable, secure cloud infrastructure using AWS services (EC2, Lambda, S3, RDS, CloudFormation/Terraform, etc.)
- Monitor and optimize cloud resource usage and costs

**CI/CD Pipeline Automation:**
- Set up and maintain robust CI/CD pipelines using tools such as GitHub Actions, GitLab CI, Jenkins, or AWS CodePipeline
- Ensure smooth deployment processes for staging and production environments

**Git Workflow Management:**
- Implement and enforce best practices for version control and branching strategies (Gitflow, trunk-based development, etc.)
- Support development teams in resolving Git issues and improving workflows

**Twilio Integration & Support:**
- Manage and maintain Twilio-based communication systems (SMS, Voice, WhatsApp, Programmable Messaging); an illustrative sketch follows below
- Develop and deploy Twilio Functions and Studio Flows for customer engagement
- Monitor communication systems and troubleshoot delivery or quality issues

**Infrastructure as Code & Automation:**
- Use tools like Terraform, CloudFormation, or Pulumi for reproducible infrastructure
- Create scripts and automation tools to streamline routine DevOps tasks

**Monitoring, Logging & Security:**
- Implement and maintain monitoring/logging tools (CloudWatch, Datadog, ELK, etc.)
- Ensure adherence to best practices around IAM, secrets management, and compliance

**Requirements**
- 3-5+ years of experience in DevOps or a similar role
- Expert-level experience with **Amazon Web Services (AWS)**
- Strong command of **Git** and Git-based CI/CD practices
- Experience building and supporting solutions using **Twilio APIs** (SMS, Voice, Programmable Messaging, etc.)
- Proficiency in scripting languages (Bash, Python, etc.)
- Hands-on experience with containerization (Docker) and orchestration tools (ECS, EKS, Kubernetes)
- Familiarity with Agile/Scrum workflows and collaborative development environments

**Preferred Qualifications**
- AWS Certifications (e.g., Solutions Architect, DevOps Engineer)
- Experience with serverless frameworks and event-driven architectures
- Previous work with other communication platforms (e.g., SendGrid, Nexmo) a plus
- Knowledge of RESTful API development and integration
- Experience working in high-availability, production-grade systems
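A minimal sketch of the Twilio Programmable Messaging work described above, using the official Twilio Python helper library; the credentials and phone numbers are placeholders and the environment-variable names are assumptions.

```python
# Illustrative sketch: send an SMS via Programmable Messaging and poll its
# delivery status, the kind of check used when troubleshooting delivery issues.
import os

from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

# Send an outbound SMS (placeholder numbers).
message = client.messages.create(
    to="+15005550006",
    from_="+15005550001",
    body="Your order has shipped.",
)

# Fetch the message again to inspect its status (queued, sent, delivered, failed, ...).
status = client.messages(message.sid).fetch().status
print(message.sid, status)
```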
Posted 1 month ago
5.0 - 10.0 years
7 - 17 Lacs
Hyderabad, Pune, Chennai
Work from Office
Airflow Data Engineer on the AWS platform. Job Title: Apache Airflow Data Engineer ("ROLE" as per TCS Role Master)
- 4-8 years of experience in AWS, Apache Airflow (on the Astronomer platform), Python, PySpark, and SQL
- Good hands-on knowledge of SQL and the data warehousing life cycle is an absolute requirement.
- Experience in creating data pipelines and orchestrating them using Apache Airflow (a minimal DAG sketch follows below)
- Significant experience with data migrations and development of Operational Data Stores, Enterprise Data Warehouses, Data Lakes, and Data Marts.
- Good to have: experience with cloud ETL and ELT in one of the tools such as DBT, Glue, EMR, or Matillion, or any other ELT tool
- Excellent communication skills to liaise with business and IT stakeholders.
- Expertise in planning project execution and effort estimation.
- Exposure to Agile ways of working.
The candidate for this position will be offered employment with TAIC or TCSL as the entity.
Keywords: data warehousing, PySpark, GitHub, AWS data platform, Glue, EMR, Redshift, Databricks, Data Marts, DBT/Glue/EMR or Matillion, data engineering, data modelling, data consumption
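A minimal Airflow DAG sketch of the orchestration work described above, assuming Airflow 2.4+ (as typically run on Astronomer); the DAG name, task bodies, and targets are placeholders.

```python
# Hedged sketch: a daily two-task pipeline (extract then load). The real tasks
# would stage data to S3 and load it into the warehouse; here they only print.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull data from a source system and stage it (e.g., to S3).
    print("extracting for", context["ds"])


def load(**context):
    # Placeholder: load the staged data into the warehouse (e.g., Redshift).
    print("loading for", context["ds"])


with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```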
Posted 1 month ago
6.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
We are looking for a skilled and proactive AWS Operational Support Analyst to join our cloud infrastructure team. The ideal candidate will be responsible for monitoring, maintaining, and improving the performance, security, and reliability of AWS-hosted environments. This role is essential in ensuring uninterrupted cloud operations and supporting DevOps, development, and business teams with cloud-related issues.

Key Responsibilities:
- Monitor AWS cloud infrastructure for performance, availability, and operational issues.
- Manage incident response, root cause analysis, and resolution of infrastructure-related issues.
- Execute daily operational tasks including backups, system patching, and performance tuning.
- Collaborate with DevOps and engineering teams to ensure smooth CI/CD operations.
- Maintain system documentation and the support knowledge base.
- Automate routine tasks using shell scripts or AWS tools (e.g., Lambda, Systems Manager); an illustrative Lambda sketch follows below.
- Manage AWS services such as EC2, RDS, S3, CloudWatch, IAM, and VPC.
- Implement cloud cost-optimization practices and security compliance controls.
- Perform health checks, generate reports, and suggest performance improvements.
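A hedged example of the routine-task automation mentioned above: an AWS Lambda handler that stops running EC2 instances tagged as dev environments, e.g., on an EventBridge schedule overnight. The tag key/value and the trigger are assumptions, not requirements from the posting.

```python
# Illustrative Lambda handler: stop any running EC2 instance tagged Environment=dev.
import boto3

ec2 = boto3.client("ec2")


def lambda_handler(event, context):
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        inst["InstanceId"]
        for reservation in response["Reservations"]
        for inst in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```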
Posted 1 month ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Qualification & Experience:
- Minimum of 8 years of experience as a Data Scientist/Engineer with demonstrated expertise in data engineering and cloud computing technologies.

Technical Responsibilities:
- Excellent proficiency in Python, with a strong focus on developing advanced skills.
- Extensive exposure to NLP and image processing concepts.
- Proficient in version control systems like Git.
- In-depth understanding of Azure deployments.
- Expertise in OCR, ML model training, and transfer learning.
- Experience working with unstructured data formats such as PDFs, DOCX, and images (an illustrative OCR sketch follows below).
- Strong familiarity with data science best practices and the ML lifecycle.
- Strong experience with data pipeline development, ETL processes, and data engineering tools such as Apache Airflow, PySpark, or Databricks.
- Familiarity with cloud computing platforms like Azure, AWS, or GCP, including services like Azure Data Factory, S3, Lambda, and BigQuery.
- Tool exposure: advanced understanding and hands-on experience with Git, Azure, Python, R programming, and data engineering tools such as Snowflake, Databricks, or PySpark.
- Data mining, cleaning, and engineering: leading the identification and merging of relevant data sources, ensuring data quality, and resolving data inconsistencies.
- Cloud solutions architecture: designing and deploying scalable data engineering workflows on cloud platforms such as Azure, AWS, or GCP.
- Data analysis: executing complex analyses against business requirements using appropriate tools and technologies.
- Software development: leading the development of reusable, version-controlled code under minimal supervision.
- Big data processing: developing solutions to handle large-scale data processing using tools like Hadoop, Spark, or Databricks.

Principal Duties & Key Responsibilities:
- Leading data extraction from multiple sources, including PDFs, images, databases, and APIs.
- Driving optical character recognition (OCR) processes to digitize data from images.
- Applying advanced natural language processing (NLP) techniques to understand complex data.
- Developing and implementing highly accurate statistical models and data engineering pipelines to support critical business decisions, and continuously monitoring their performance.
- Designing and managing scalable cloud-based data architectures using Azure, AWS, or GCP services.
- Collaborating closely with business domain experts to identify and drive key business value drivers.
- Documenting model design choices, algorithm selection processes, and dependencies.
- Effectively collaborating in cross-functional teams within the CoE and across the organization.
- Proactively seeking opportunities to contribute beyond assigned tasks.

Required Competencies:
- Exceptional communication and interpersonal skills.
- Proficiency in Microsoft Office 365 applications.
- Ability to work independently, demonstrate initiative, and provide strategic guidance.
- Strong networking, communication, and people skills.
- Outstanding organizational skills with the ability to work independently and as part of a team.
- Excellent technical writing skills.
- Effective problem-solving abilities.
- Flexibility and adaptability to work flexible hours as required.

Key Competencies / Values:
- Client Focus: tailoring skills and understanding client needs to deliver exceptional results.
- Excellence: striving for excellence as defined by clients, delivering high-quality work.
- Trust: building and retaining trust with clients, colleagues, and partners.
- Teamwork: collaborating effectively to achieve collective success.
- Responsibility: taking ownership of performance and safety, ensuring accountability.
- People: creating an inclusive environment that fosters individual growth and development.
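An illustrative OCR sketch for the PDF-and-image digitization work described above. The posting does not name a library; pytesseract and pdf2image (which wrap the Tesseract and poppler binaries) are assumptions, as is the input file name.

```python
# Hedged example: render each PDF page to an image and extract text with Tesseract.
# Requires the tesseract and poppler system packages in addition to the Python libs.
from pdf2image import convert_from_path
import pytesseract


def ocr_pdf(path: str) -> str:
    """Return the concatenated OCR text of all pages in a PDF."""
    pages = convert_from_path(path, dpi=300)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)


if __name__ == "__main__":
    print(ocr_pdf("sample_invoice.pdf"))  # hypothetical input file
```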
Posted 1 month ago
8.0 - 10.0 years
8 - 18 Lacs
Gurugram
Work from Office
Oracle PL/SQL
Key Responsibilities:
- Develop and maintain complex PL/SQL procedures, packages, triggers, functions, and views in Oracle.
- Hands-on experience in PostgreSQL and AWS is a must.
- Migrate and refactor PL/SQL logic and data from Oracle to PostgreSQL.
- Optimize SQL queries and ensure high-performance database access across platforms.
- Design and implement data models, schemas, and stored procedures in PostgreSQL.
- Work closely with application developers, data architects, and DevOps teams to ensure seamless database integration with applications.
- Develop scripts and utilities to automate database tasks, backups, and monitoring using AWS services.
- Leverage AWS cloud services such as RDS, S3, Lambda, and Glue for data processing and storage.
- Participate in code reviews, performance tuning, and troubleshooting of database-related issues.
- Ensure data integrity, consistency, and security across environments.
Interested candidates, please share your resume with madhumithak@sightspectrum.in
Posted 1 month ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Designation: Python + AWS
Experience: 5+ years
Work Location: Bangalore / Mumbai
Notice Period: Immediate joiners / serving notice period
Job Description - Mandatory Skills:
- Python data structures: pandas, numpy
- Data operations: DataFrames, dict, JSON, lists, tuples, strings
- OOP and APIs (Flask/FastAPI)
- AWS services (IAM, EC2, Lambda, S3, DynamoDB, etc.); an illustrative sketch follows below
Sincerely, Sonia TS
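A short sketch touching the listed skills: a pandas aggregation over an S3 object, exposed through a FastAPI endpoint with boto3. The bucket, key, and column names are placeholders.

```python
# Hedged example combining pandas, FastAPI, and boto3.
import boto3
import pandas as pd
from fastapi import FastAPI

app = FastAPI()
s3 = boto3.client("s3")


@app.get("/sales/summary")
def sales_summary():
    # Read a CSV object from S3 into a DataFrame (placeholder bucket/key).
    obj = s3.get_object(Bucket="example-bucket", Key="sales.csv")
    df = pd.read_csv(obj["Body"])
    # Typical DataFrame operation: total amount per region, returned as JSON.
    return df.groupby("region")["amount"].sum().to_dict()
```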
Posted 1 month ago
5.0 - 8.0 years
12 - 22 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- Manage and monitor AWS cloud infrastructure, including EC2, S3, VPC, RDS, Lambda, and more.
- Implement and maintain Ubuntu Linux servers and applications.
- Monitor system performance, conduct backups, and address potential issues.
- Set up and maintain MySQL databases, optimizing performance and ensuring data integrity.
- Collaborate with development teams to design, develop, and deploy secure cloud-based applications.
- Implement and maintain cloud security best practices.
- Provide technical support and guidance on cloud infrastructure and related technologies.
- Stay updated on industry trends and best practices.

Preferred candidate profile:
- Bachelor's degree in Computer Science, IT, or a related field.
- 5-8 years of overall experience, with a minimum of 3 years in AWS cloud services.
- Strong Ubuntu Linux administration skills.
- Familiarity with AWS services and cloud security best practices.
- Strong problem-solving skills and the ability to work independently and in a team.
- Excellent communication skills.
- Basic understanding of MySQL database administration is a plus.
- Relevant AWS certifications are a plus.
Posted 1 month ago
5.0 - 10.0 years
7 - 14 Lacs
Chennai, Bengaluru
Work from Office
Job Summary
Synechron is seeking a skilled Full Stack Developer to join our innovative technology team. This position focuses on designing, developing, and maintaining high-performance, scalable web applications using Next.js and related modern technologies. As a key contributor, you will collaborate with cross-disciplinary teams to deliver responsive and user-centric solutions that support the organization's digital growth and strategic objectives. Your expertise will help ensure the delivery of seamless, secure, and efficient web experiences for our clients and stakeholders.

Software Requirements
Required Skills and Experience:
- Proficiency in Next.js, React, and modern JavaScript/TypeScript frameworks
- Strong experience with .NET Core, C#, and building scalable web APIs
- Hands-on experience designing and consuming GraphQL APIs
- Practical knowledge of AWS services such as EC2, S3, Lambda, and RDS
- Familiarity with version control systems, particularly Git
- Experience with CI/CD pipelines and automation tools like Jenkins or TeamCity
- Working knowledge of Agile frameworks and tools such as Jira and Confluence
Preferred Skills:
- Containerization skills with Docker and Kubernetes
- Knowledge of testing frameworks for unit and integration testing
- Understanding of security best practices and data protection regulations

Overall Responsibilities
- Develop, enhance, and maintain web applications leveraging Next.js for front-end and .NET for back-end components
- Build, optimize, and consume RESTful and GraphQL APIs to enable efficient data exchange
- Deploy, monitor, and scale cloud-based applications using AWS services, ensuring high availability and performance standards
- Collaborate actively with UX/UI designers, product managers, and fellow developers to deliver high-quality solutions
- Participate in code reviews, pair programming, and the adoption of best coding practices
- Continuously evaluate emerging technologies and recommend improvements for application architecture and performance
- Contribute to project planning, documentation, and technical decision-making for application features and integrations

Technical Skills (By Category)
Programming Languages:
- Required: JavaScript (including TypeScript), C#
- Preferred: additional JavaScript frameworks/libraries, such as Redux or MobX
Databases / Data Management:
- Required: experience with relational databases such as MSSQL or Oracle, and NoSQL solutions like MongoDB
Cloud Technologies:
- Required: AWS (EC2, S3, Lambda, RDS)
- Preferred: Azure cloud platform expertise
Frameworks and Libraries:
- Required: Next.js, React
- Preferred: state management libraries; testing frameworks like Jest or Mocha
Development Tools and Methodologies:
- Required: Git, CI/CD tools (Jenkins, TeamCity), version control practices
- Preferred: containerization with Docker, orchestration with Kubernetes
Other:
- Familiarity with Agile/Scrum processes using Jira and Confluence
Security & Compliance:
- Understanding of secure coding practices, data privacy, and compliance regulations relevant to web development

Experience Requirements
- 5 to 12 years of experience in full-stack web development, with demonstrable expertise in Next.js and .NET technologies
- Proven track record in developing scalable, production-grade web applications
- Experience working within Agile environments, participating in sprint planning and continuous delivery
- Industry experience in fintech, e-commerce, or enterprise solutions is a plus but not mandatory
- Prior leadership or mentoring experience is advantageous

Day-to-Day Activities
- Architect, develop, and maintain feature-rich, responsive web applications
- Collaborate with cross-functional teams on feature design, implementation, and testing
- Develop and optimize APIs and facilitate data integration across systems
- Conduct code reviews, unit testing, and performance tuning to ensure code quality
- Manage deployment processes and monitor application health in cloud environments
- Engage in regular stand-ups, planning sessions, and technical discussions
- Identify, troubleshoot, and resolve software defects and performance issues promptly

Qualifications
- Bachelor's or Master's degree in Computer Science, Software Engineering, Information Technology, or a related field
- Certifications in cloud technologies (e.g., AWS Certified Solutions Architect) or web development are a plus
- Evidence of continuous learning through industry certifications, courses, or self-driven projects
- Strong portfolio demonstrating previous work with Next.js, React, and cloud-based application deployment

Professional Competencies
- Strong analytical and problem-solving skills to address complex technical challenges
- Effective communication and stakeholder management abilities
- Leadership qualities in mentoring team members and driving technical discussions
- Ability to adapt quickly to changing project requirements and technological advances
- Innovation-driven mindset to explore new tools, frameworks, and best practices
- Strong organizational skills for managing multiple tasks and meeting deadlines
Posted 1 month ago
4.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Machine Learning & Data Pipelines:
- Strong understanding of machine learning principles, lifecycle, and deployment practices
- Experience in designing and building ML pipelines
- Knowledge of deploying ML models on AWS Lambda, EKS, or other relevant services
- Working knowledge of Apache Airflow for orchestration of data workflows
- Proficiency in Python for scripting, automation, and ML model development with Data Scientists
- Basic understanding of SQL for querying and data analysis

Cloud and DevOps Experience:
- Hands-on experience with AWS services, including but not limited to AWS Glue, Lambda, S3, SQS, and SNS
- Proficient in checking and interpreting CloudWatch logs and setting up alarms (an illustrative alarm sketch follows below)
- Infrastructure as Code (IaC) experience using Terraform
- Experience with CI/CD pipelines, particularly using GitLab for code and infrastructure deployments
- Understanding of cloud cost optimization and budgeting, with the ability to assess cost implications of various AWS services
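A hedged sketch of the CloudWatch alarm setup mentioned above, created with boto3: an alarm on Lambda error counts. The function name, threshold, and SNS topic ARN are placeholders.

```python
# Illustrative only: alarm when an inference Lambda reports any errors in 5 minutes.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ml-inference-lambda-errors",     # placeholder alarm name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "ml-inference"}],  # placeholder
    Statistic="Sum",
    Period=300,                                 # 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:alerts"],    # placeholder ARN
)
```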
Posted 1 month ago
8.0 - 12.0 years
25 - 40 Lacs
Chennai
Work from Office
We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases like Snowflake, Redshift, or Databricks.

Key Responsibilities:
- Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage.
- Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration (a minimal Glue job sketch follows below).
- Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles.
- Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking.
- Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies.
- Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS.
- Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions.
- Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning.

Job Requirements
Required Qualifications & Skills:
- Strong expertise in AWS Cloud services: compute (EC2), storage (S3), and security (IAM).
- Proficiency in programming languages and services: Python, PySpark, and AWS Lambda.
- Mandatory experience in ETL tools: AWS Glue and DMS for data migration and transformation.
- Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus.
- Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, EDW principles.
- Experience in designing and implementing large-scale, high-performance data solutions.
- Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions.
- Excellent communication and collaboration skills, with experience working in agile environments.

Preferred Qualifications:
- AWS Certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent).
- Experience with real-time data streaming (Kafka, Kinesis, or similar).
- Familiarity with Infrastructure as Code (Terraform, CloudFormation).
- Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.).
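A minimal AWS Glue PySpark job skeleton of the kind this role builds, assuming it runs inside the Glue job environment; the catalog database, table, filter column, and output path are placeholders.

```python
# Hedged sketch: read a Glue Data Catalog table, filter it, and write Parquet to S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table as a DynamicFrame, then work with it as a Spark DataFrame.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"      # placeholder catalog names
)
df = dyf.toDF().filter("order_status = 'COMPLETE'")

# Write the curated data back to S3 as Parquet, partitioned by order date.
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"        # placeholder path
)

job.commit()
```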
Posted 1 month ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Work from Office
- Strong experience with Python, SQL, PySpark, and AWS Glue.
- Good to have: shell scripting, Kafka.
- Good knowledge of DevOps pipeline usage (Jenkins, Bitbucket, EKS, Lightspeed).
- Experience with AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
- Orchestration using Airflow.
- Good to have: streaming technologies and processing engines (Kinesis, Kafka, Pub/Sub, and Spark Streaming).
- Good debugging skills.
- Should have a strong hands-on design and engineering background in AWS, across a wide range of AWS services, with the ability to demonstrate working on large engagements.
- Strong experience with and implementation of Data Lake, Data Warehousing, and Data Lakehouse architectures.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Demonstrable knowledge of applying data engineering best practices (coding practices to DS, unit testing, version control, code review).
- Experience in the insurance domain preferred.
Posted 1 month ago
5.0 - 7.0 years
12 - 18 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are hiring an experienced Integration Engineer with deep expertise in Dell Boomi and proven skills in Python, AWS, and automation frameworks. This role focuses on building and maintaining robust integration pipelines between enterprise systems like Salesforce, Snowflake, and EDI platforms, enabling seamless data flow and test automation.

Key Responsibilities:
- Design, develop, and maintain integration workflows using Dell Boomi.
- Build and enhance backend utilities and services using Python to support Boomi integrations.
- Integrate test frameworks with AWS services such as Lambda, API Gateway, CloudWatch, etc.
- Develop utilities for EDI document automation (e.g., generating and validating EDI 850 purchase orders).
- Perform data syncing and transformation between systems like Salesforce, Boomi, and Snowflake.
- Automate post-test data cleanup and validation within Salesforce using Boomi and Python.
- Implement infrastructure-as-code using Terraform to manage cloud resources.
- Create and execute API tests using Postman, and automate test cases using Cucumber and Gherkin.
- Integrate test results into Jira and X-Ray for traceability and reporting.

Must-Have Qualifications:
- 5 to 7 years of professional experience in software or integration development.
- Strong hands-on experience with Dell Boomi (Atoms, Integration Processes, Connectors, APIs).
- Solid programming experience with Python.
- Experience working with AWS services: Lambda, API Gateway, CloudWatch, S3, etc.
- Working knowledge of Terraform for cloud infrastructure automation.
- Familiarity with SQL and modern data platforms (e.g., Snowflake).
- Experience working with Salesforce and writing SOQL queries.
- Understanding of EDI document standards and related integration use cases.
- Test automation experience using Cucumber, Gherkin, and Postman.
- Integration of QA/test reports with Jira, X-Ray, or similar platforms.
- Familiarity with CI/CD tools like GitHub Actions, Jenkins, or similar.

Tools & Technologies:
- Integration: Dell Boomi, REST/SOAP APIs
- Languages: Python, SQL
- Cloud: AWS (Lambda, API Gateway, CloudWatch, S3)
- Infrastructure: Terraform
- Data Platforms: Snowflake, Salesforce
- Automation & Testing: Cucumber, Gherkin, Postman
- DevOps: Git, GitHub Actions
- Tracking/Reporting: Jira, X-Ray

Location: Remote; Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 1 month ago
5.0 - 8.0 years
5 - 11 Lacs
Noida
Work from Office
Key Responsibilities:
- Cloud Infrastructure Management: Design, deploy, and maintain AWS cloud environments, leveraging a wide range of AWS services to meet business requirements for performance, scalability, and cost optimization.
- Security & Compliance: Implement and manage security controls, compliance frameworks, and cloud security best practices, including access control, encryption, monitoring, and threat detection.
- High-Availability Solutions: Develop and manage highly available, scalable cloud platforms to ensure minimal downtime and maximum reliability.
- Network Architecture: Create and maintain network diagrams and cloud architecture documentation, illustrating detailed system designs, security postures, and data flow.
- SIEM Integration: Integrate and manage Security Information and Event Management (SIEM) tools to enhance cloud security monitoring and response capabilities.
- Automation & Optimization: Automate infrastructure deployments and cloud management tasks using Infrastructure as Code (IaC) tools like AWS CloudFormation, Terraform, or equivalent technologies.
- Monitoring & Performance Tuning: Monitor cloud environments, identify performance bottlenecks, and optimize cloud resource usage and configuration for cost and efficiency.
- Collaboration: Work closely with cross-functional teams, including developers, operations, and security, to ensure seamless cloud integration and optimal system performance.
- Incident Response: Support and lead incident response activities related to cloud security and operational issues, ensuring quick resolution and root cause analysis.
Posted 1 month ago
10.0 - 15.0 years
12 - 17 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 11

The Team: We are looking for a highly motivated, enthusiastic, and skilled engineering lead for Commodity Insights. We strive to deliver solutions that are sector-specific, data-rich, and hyper-targeted for evolving business needs. Our software development leaders are involved in the full product life cycle, from design through release. You would be joining a strong, innovative team working on the content management platforms which support a large revenue stream for S&P Commodity Insights. Working very closely with the Product Owner and Development Manager, teams are responsible for the development of user enhancements and maintaining good technical hygiene. The successful candidate will assist in the design, development, release, and support of content platforms. Skills required include ReactJS, Spring Boot, RESTful microservices, AWS services (S3, ECS, Fargate, Lambda, etc.), CSS, HTML, AJAX, JSON, XML, and SQL (PostgreSQL/Oracle). The candidate should be aware of Gen AI/LLM models such as OpenAI and Claude, should be enthusiastic about prompt building for Gen AI and business-related prompts, and should be able to develop and optimize prompts for AI models to improve accuracy and relevance. The candidate must be able to work well with a distributed team, demonstrate an ability to articulate technical solutions for business requirements, have experience with content management/packaging solutions, and embrace a collaborative approach for the implementation of solutions.

Responsibilities:
- Lead and mentor a team through all phases of the software development lifecycle, adhering to agile methodologies (analyze, design, develop, test, debug, and deploy).
- Ensure high-quality deliverables and foster a collaborative environment.
- Be proficient with developer tools supporting the CI/CD process, including configuring and executing automated pipelines to build and deploy software components.
- Actively contribute to team planning and ceremonies, and commit to team agreements and goals.
- Ensure code quality and security by understanding vulnerability patterns, running code scans, and remediating issues.
- Mentor the junior developers; make sure that code review tasks on all user stories are added and completed in a timely manner.
- Perform reviews and integration testing to assure the quality of project development efforts.
- Design database schemas, conceptual data models, UI workflows, and application architectures that fit into the enterprise architecture.
- Support the user base, assisting with tracking down issues and analyzing feedback to identify product improvements.
- Understand and commit to the culture of S&P Global: the vision, purpose, and values of the organization.

Basic Qualifications:
- 10+ years of experience in an agile team development role, delivering software solutions using Scrum
- Java, J2EE, JavaScript, CSS/HTML, AJAX
- ReactJS, Spring Boot, microservices, RESTful services, OAuth
- XML, JSON, data transformation
- SQL and NoSQL databases (Oracle, PostgreSQL)
- Working knowledge of Amazon Web Services (Lambda, Fargate, ECS, S3, etc.)
- Experience with Gen AI/LLM models such as OpenAI and Claude is preferred
- Experience with agile workflow tools (e.g., VSTS, JIRA)
- Experience with source code management tools (e.g., Git), build management tools (e.g., Maven), and continuous integration/delivery processes and tools (e.g., Jenkins, Ansible)
- Self-starter able to work to achieve objectives with minimum direction
- Comfortable working independently as well as in a team
- Excellent verbal and written communication skills

Preferred Qualifications:
- Analysis of business information patterns, data analysis, and data modeling
- Working with user experience designers to deliver end-user-focused benefits realization
- Familiarity with containerization (Docker, Kubernetes)
- Messaging/queuing solutions (Kafka, etc.)
- Familiarity with application security development/operations best practices (including static/dynamic code analysis tools)

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping.
S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit .
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: health care coverage designed for the mind and body.
Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
Posted 1 month ago
4.0 - 7.0 years
5 - 16 Lacs
Hyderabad, Bengaluru
Work from Office
Roles and Responsibilities:
- Design, develop, test, deploy, and maintain Snowflake data warehouses for clients.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions.
- Develop ETL processes using Python scripts to extract data from various sources and load it into Snowflake tables (a hedged load sketch follows below).
- Troubleshoot issues related to Snowflake performance tuning, query optimization, and data quality.

Job Requirements:
- 4-7 years of experience in developing large-scale data warehouses on AWS using Snowflake.
- Strong understanding of lambda expressions in the Snowflake SQL language.
- Experience with the Python programming language for ETL development.
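A hedged sketch of a Python-to-Snowflake load step like the ETL work described above, using the snowflake-connector-python package; the connection parameters, warehouse/database/schema names, target table, and sample rows are all placeholders.

```python
# Illustrative only: insert a small batch of rows into a Snowflake staging table.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",        # placeholder warehouse
    database="ANALYTICS",      # placeholder database
    schema="STAGING",          # placeholder schema
)

rows = [("2024-01-01", "IN", 1250.0), ("2024-01-01", "US", 980.5)]  # sample data

try:
    cur = conn.cursor()
    cur.executemany(
        "INSERT INTO daily_sales (sale_date, country, amount) VALUES (%s, %s, %s)",
        rows,
    )
    conn.commit()
finally:
    conn.close()
```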
Posted 1 month ago
7.0 - 12.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Labcorp is hiring a Senior Data Engineer. This person will be an integrated member of the Labcorp Data and Analytics team and work within the IT team, playing a crucial role in designing, developing, and maintaining data solutions using Databricks, Fabric, Spark, PySpark, and Python. The engineer is responsible for reviewing business requests and translating them into technical solutions and technical specifications. In addition, they will mentor fellow developers to grow their knowledge and expertise, working in a fast-paced, high-volume processing environment where quality and attention to detail are vital.

RESPONSIBILITIES:
- Design and implement end-to-end data engineering solutions by leveraging the full suite of Databricks and Fabric tools, including data ingestion, transformation, and modeling.
- Design, develop, and maintain end-to-end data pipelines using Spark, ensuring scalability, reliability, and cost-optimized solutions.
- Conduct performance tuning and troubleshooting to identify and resolve any issues.
- Implement data governance and security best practices, including role-based access control, encryption, and auditing.
- Work in a fast-paced environment and perform effectively in an agile development setting.

REQUIREMENTS:
- 8+ years of experience in designing and implementing data solutions, with at least 4+ years in data engineering.
- Extensive experience with Databricks and Fabric, including a deep understanding of their architecture, data modeling, and real-time analytics.
- Minimum 6+ years of experience in Spark, PySpark, and Python.
- Strong experience in SQL, Spark SQL, data modeling, and RDBMS concepts.
- Strong knowledge of Data Fabric services, particularly Data Engineering, Data Warehouse, Data Factory, and Real-Time Intelligence.
- Strong problem-solving skills, with the ability to multi-task.
- Familiarity with security best practices in cloud environments, Active Directory, encryption, and data privacy compliance.
- Effective oral and written communication.
- Experience with Agile development, Scrum, and Application Lifecycle Management (ALM).
- Preference given to current or former Labcorp employees.

EDUCATION: Bachelor's degree in Engineering, or MCA.
Posted 1 month ago
10.0 - 15.0 years
15 - 30 Lacs
Noida, Pune, Bengaluru
Work from Office
Roles and responsibilities:
- Work closely with the Product Owners and stakeholders to design the technical architecture for the data platform to meet the requirements of the proposed solution.
- Work with the leadership to set the standards for software engineering practices within the machine learning engineering team and support other disciplines.
- Play an active role in leading team meetings and workshops with clients.
- Choose and use the right analytical libraries, programming languages, and frameworks for each task.
- Help the Data Engineering team produce high-quality code that allows us to put solutions into production.
- Create and own the technical product backlogs for products; help the team close the backlogs on time.
- Refactor code into reusable libraries, APIs, and tools.
- Help us shape the next generation of our products.

What We're Looking For:
- 10+ years of total experience in data management, including implementation of modern data ecosystems on AWS/cloud platforms.
- Strong experience with AWS ETL and file-movement tools (Glue, Athena, Lambda, Kinesis, and the rest of the AWS integration stack).
- Strong experience with Agile development and SQL.
- Strong experience with two or three AWS database technologies (Redshift, Aurora, RDS, S3, and other AWS data services), covering security, policies, and access management.
- Strong programming experience with Python and Spark.
- Strong learning curve for new technologies.
- Experience with Apache Airflow and other automation stacks.
- Excellent data modeling skills.
- Excellent oral and written communication skills.
- A high level of intellectual curiosity, external perspective, and interest in innovation.
- Strong analytical, problem-solving, and investigative skills.
- Experience in applying quality and compliance requirements.
- Experience with security models and development on large data sets.
Posted 1 month ago
4.0 - 7.0 years
12 - 20 Lacs
Hyderabad, Pune, Delhi / NCR
Work from Office
Role and Responsibilities:
- Managing the complete software development process from conception to deployment.
- Maintaining and upgrading the software following deployment.
- Managing the end-to-end life cycle to produce software and applications.
- Overseeing and guiding the analysing, writing, building, and deployment of software.
- Overseeing the automated testing and providing feedback to management during the development process.
- Modifying and testing changes to previously developed programs.

Skills and Experience:
- 3+ years of experience in developing enterprise-level applications using React, JavaScript, TypeScript, HTML, and CSS.
- 1+ years of experience with AWS services (Lambda, EC2, etc.).
- Strong proficiency in JavaScript, the object model, DOM manipulation and event handlers, and data structures.
- Complete understanding of the virtual DOM, component lifecycle, REST API integration, etc.
- Experience in writing UI test cases.
- Excellent verbal and written communication and collaboration skills to effectively communicate with both business and technical teams.
- Comfortable working in a fast-paced, result-oriented environment.
- Experience in leading a team.
Posted 1 month ago
9.0 - 14.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Qualifications/Skill Sets:
- Experience: 8+ years of experience in software engineering, with at least 3+ years at the Staff Engineer or Technical Lead level.
- Architecture Expertise: Proven track record designing and building large-scale, multi-tenant SaaS applications on cloud platforms (e.g., AWS, Azure, GCP).
- Tech Stack: Expertise in modern backend languages (e.g., Java, Python, Go, Node.js), frontend frameworks (e.g., React, Angular), and database systems (e.g., PostgreSQL, MySQL, NoSQL).
- Cloud & Infrastructure: Strong knowledge of containerization (Docker, Kubernetes), serverless architectures, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform, CloudFormation). End-to-end development and deployment experience in cloud applications.
- Distributed Systems: Deep understanding of event-driven architecture, message queues (e.g., Kafka, RabbitMQ), and microservices.
- Security: Strong focus on secure coding practices and familiarity with identity management (OAuth2, SAML) and data encryption.
- Communication: Excellent verbal and written communication skills with the ability to present complex technical ideas to stakeholders.
- Problem Solving: Strong analytical mindset and a proactive approach to identifying and solving system bottlenecks.
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
What you will do
In this vital role you will be responsible for designing, building, and maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments. Please note, this is an onsite role based in Hyderabad.

Roles & Responsibilities:
AWS Infrastructure Design & Implementation
- Architect, implement, and manage highly available AWS cloud environments.
- Design VPCs, subnets, security groups, and IAM policies to enforce security standard processes.
- Optimize AWS costs using reserved instances, savings plans, and auto-scaling.
Infrastructure as Code (IaC) & Automation
- Develop, maintain, and enhance Terraform and CloudFormation templates for cloud provisioning.
- Automate deployment, scaling, and monitoring using AWS-native tools and scripting.
- Implement and manage CI/CD pipelines for infrastructure and application deployments.
Cloud Security & Compliance
- Enforce standard processes in IAM, encryption, and network security.
- Ensure compliance with SOC 2, ISO 27001, and NIST standards.
- Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.
Monitoring & Performance Optimization
- Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring.
- Implement autoscaling, load balancing, and caching strategies for performance optimization.
- Troubleshoot cloud infrastructure issues and conduct root cause analysis.
Collaboration & DevOps Practices
- Work closely with software engineers, SREs, and DevOps teams to support deployments.
- Maintain GitOps standard processes for cloud infrastructure versioning.
- Support the on-call rotation for high-priority cloud incidents.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 4 to 6 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Bachelor's degree and 6 to 8 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Diploma and 10 to 12 years of experience in computer science, IT, or a related field with hands-on cloud experience.

Must-Have Skills:
- Deep hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.).
- Expertise in Terraform and CloudFormation for AWS infrastructure automation.
- Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.).
- Strong troubleshooting and debugging skills in cloud networking, storage, and security.

Preferred Qualifications (Good-to-Have Skills):
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Familiarity with HPC and DGX Cloud.

Professional Certifications (preferred):
- AWS Certified Solutions Architect (Associate or Professional)
- AWS Certified DevOps Engineer (Professional)
- Terraform Associate Certification

Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work effectively with global, virtual teams.
- Effective communication and collaboration with cross-functional teams.
- Ability to work in a fast-paced, cloud-first environment.
Posted 1 month ago
1.0 - 3.0 years
3 - 7 Lacs
Hyderabad
Work from Office
The role is responsible for designing, developing, and maintaining software solutions for Research scientists. Additionally, it involves automating operations, monitoring system health, and responding to incidents to minimize downtime. You will join a multi-functional team of scientists and software professionals that enables technology and data capabilities to evaluate drug candidates and assess their abilities to affect the biology of drug targets. This team implements scientific software platforms that enable the capture, analysis, storage, and reporting for our Large Molecule Discovery Research team (Design, Make, Test and Analyze processes). The team also interfaces heavily with teams supporting our in vitro assay management systems and our compound inventory platforms. The ideal candidate possesses experience in the pharmaceutical or biotech industry, strong technical skills, and full-stack software engineering experience (spanning SQL, back-end and front-end web technologies, and automated testing).

Roles & Responsibilities:
- Work closely with the product team, the business team including scientists, and other collaborators.
- Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications.
- Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements.
- Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software.
- Conduct code reviews to ensure code quality and adherence to standard methodologies.
- Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations.
- Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently.
- Stay updated with the latest technology and security trends and advancements.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek has the following qualifications.

Basic Qualifications:
- Master's degree with 1-3 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field; OR
- Bachelor's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field; OR
- Diploma with 7-9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field.

Preferred Qualifications and Experience:
- 1+ years of experience in implementing and supporting biopharma scientific software platforms.

Functional Skills:
- Proficient in Java or Python.
- Proficient in at least one JavaScript UI framework (e.g., ExtJS, React, or Angular).
- Proficient in SQL (e.g., Oracle, PostgreSQL, Databricks).

Preferred Qualifications:
- Experience with event-based architecture and serverless AWS services such as EventBridge, SQS, Lambda, or ECS.
- Experience with Benchling.
- Hands-on experience with full-stack software development.
- Strong understanding of software development methodologies, mainly Agile and Scrum.
- Working experience with DevOps practices and CI/CD pipelines.
- Experience with infrastructure as code (IaC) tools (Terraform, CloudFormation).
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk).
- Experience with automated testing tools and frameworks.
- Experience with big data technologies (e.g., Spark, Databricks, Kafka).
- Experience leveraging AI assistants (e.g., GitHub Copilot) to accelerate software development and improve code quality.

Professional Certifications:
- AWS Certified Cloud Practitioner preferred.

Soft Skills:
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication and interpersonal skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to learn quickly and work independently.
- Team-oriented, with a focus on achieving team goals.
- Ability to manage multiple priorities successfully.
- Strong presentation and public speaking skills.
Posted 1 month ago
6.0 - 11.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Looking for a skilled Senior Data Science Engineer with 6-12 years of experience to lead the development of advanced computer vision models and systems. The ideal candidate will have hands-on experience with state-of-the-art architectures and a deep understanding of the complete ML lifecycle. This position is based in Bengaluru.

Roles and Responsibility:
- Lead the development and implementation of computer vision models for tasks such as object detection, tracking, image retrieval, and scene understanding.
- Design and execute end-to-end pipelines for data preparation, model training, evaluation, and deployment.
- Perform fine-tuning and transfer learning on large-scale vision-language models to meet application-specific needs (a hedged transfer-learning sketch follows below).
- Optimize deep learning models for edge inference (NVIDIA Jetson, TensorRT, OpenVINO) and real-time performance.
- Develop scalable and maintainable ML pipelines using tools such as MLflow, DVC, and Kubeflow.
- Automate experimentation and deployment processes using CI/CD workflows.
- Collaborate cross-functionally with MLOps, backend, and product teams to align technical efforts with business needs.
- Monitor, debug, and enhance model performance in production environments.
- Stay up-to-date with the latest trends in CV/AI research and rapidly prototype new ideas for real-world use.

Job Requirements:
- 6-7+ years of hands-on experience in data science and machine learning, with at least 4 years focused on computer vision.
- Strong experience with deep learning frameworks: PyTorch (preferred), TensorFlow, Hugging Face Transformers.
- In-depth understanding and practical experience with class-incremental learning and lifelong learning systems.
- Proficient in Python, including data processing libraries like NumPy, Pandas, and OpenCV.
- Strong command of version control and reproducibility tools (e.g., MLflow, DVC, Weights & Biases).
- Experience with training and optimizing models for GPU inference and edge deployment (Jetson, Coral, etc.).
- Familiarity with ONNX, TensorRT, and model quantization/conversion techniques.
- Demonstrated ability to analyze and work with large-scale visual datasets in real-time or near-real-time systems.
- Experience working in fast-paced startup environments with ownership of production AI systems.
- Exposure to cloud platforms such as AWS (SageMaker, Lambda), GCP, or Azure for ML workflows.
- Experience with video analytics, real-time inference, and event-based vision systems.
- Familiarity with monitoring tools for ML systems (e.g., Prometheus, Grafana, Sentry).
- Prior work in domains such as retail analytics, healthcare, or surveillance/IoT-based CV applications.
- Contributions to open-source computer vision libraries or publications in top AI/ML conferences (e.g., CVPR, NeurIPS, ICCV).
- Comfortable mentoring junior engineers and collaborating with cross-functional stakeholders.
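A hedged transfer-learning sketch for the fine-tuning work described above, using PyTorch and a recent torchvision (0.13+ weights API); the backbone choice (ResNet-18), number of classes, and random batch stand in for a real dataset and model.

```python
# Illustrative only: freeze a pretrained backbone and train a new classification head.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of target classes

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so only the new head is trained initially.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch (stand-in for a real DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```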
Posted 1 month ago
7.0 - 10.0 years
8 - 15 Lacs
Hyderabad, Bengaluru
Hybrid
Key Responsibilities:
- Use data mappings and models provided by the data modeling team to build robust Snowflake data pipelines.
- Design and implement pipelines adhering to 2NF/3NF normalization standards.
- Develop and maintain ETL processes for integrating data from multiple ERP and source systems.
- Build scalable and secure Snowflake data architecture supporting Data Quality (DQ) needs.
- Raise CAB requests via Carrier's change process and manage production deployments.
- Provide UAT support and ensure a smooth transition of finalized pipelines to support teams.
- Create and maintain comprehensive technical documentation for traceability and handover.
- Collaborate with data modelers, business stakeholders, and governance teams to enable DQ integration.
- Optimize complex SQL queries, perform performance tuning, and ensure data-ops best practices.

Requirements:
- Strong hands-on experience with Snowflake.
- Expert-level SQL skills and a deep understanding of data transformation.
- Solid grasp of data architecture and 2NF/3NF normalization techniques.
- Experience with cloud-based data platforms and modern data pipeline design.
- Exposure to AWS data services like S3, Glue, Lambda, and Step Functions (preferred).
- Proficiency with ETL tools and working in Agile environments.
- Familiarity with the Carrier CAB process or similar structured deployment frameworks.
- Proven ability to debug complex pipeline issues and enhance pipeline scalability.
- Strong communication and collaboration skills.
Posted 1 month ago
9.0 - 12.0 years
35 - 40 Lacs
Bengaluru
Work from Office
We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role.

Key Responsibilities
- Design and implement end-to-end data platforms leveraging AWS services.
- Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness.
- Develop and optimize solutions using Redshift, including stored procedures, federated queries, and the Redshift Data API (an illustrative Data API sketch follows below).
- Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines.
- Write efficient Python code and data frame transformations, along with unit testing.
- Manage orchestration tools such as AWS Step Functions and Airflow.
- Perform Redshift performance tuning to ensure optimal query execution.
- Collaborate with stakeholders to understand requirements and communicate technical solutions clearly.

Required Skills & Qualifications
- Minimum 9 years of IT experience with proven AWS expertise.
- Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda.
- Mandatory experience working with AWS Redshift, including stored procedures and performance tuning.
- Experience building end-to-end data platforms on AWS.
- Proficiency in Python, especially working with data frames and writing testable, production-grade code.
- Familiarity with orchestration tools like Airflow or AWS Step Functions.
- Excellent problem-solving skills and a collaborative mindset.
- Strong verbal and written communication and stakeholder management abilities.

Nice to Have
- Experience with CI/CD for data pipelines.
- Knowledge of AWS Lake Formation and Data Governance practices.
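A hedged example of using the Redshift Data API mentioned above: submit a SQL statement from Python via boto3 and poll for the result. The cluster identifier, database, secret ARN, and query are placeholders.

```python
# Illustrative only: run a query through the Redshift Data API and print the rows.
import time

import boto3

client = boto3.client("redshift-data")

resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster
    Database="analytics",                   # placeholder database
    SecretArn="arn:aws:secretsmanager:ap-south-1:123456789012:secret:rs-creds",  # placeholder
    Sql="SELECT region, SUM(amount) FROM sales GROUP BY region;",
)

statement_id = resp["Id"]
# Poll until the statement reaches a terminal state.
while client.describe_statement(Id=statement_id)["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

result = client.get_statement_result(Id=statement_id)
for record in result["Records"]:
    print(record)
```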
Posted 1 month ago