7.0 - 10.0 years
9 - 12 Lacs
Hyderabad
Work from Office
Your Responsibilities:
- Drive capability building for resiliency testing targeted toward modernization initiatives, the common capabilities framework, and reference architectures.
- Work closely with application development, IT Architecture, the Application Resiliency Foundation, and other key partners to ensure end-to-end application resiliency while upholding ETE policy, procedures, and standards.
- Develop and support ETE Resiliency services and work products such as the Resiliency Test Scorecard, Failure Mode Analysis, and Test Scenarios.
- Improve and set the direction for the resiliency test automation framework, publishing reusable artifacts to the Developer Marketplace.
- Capture technical requirements, assess capabilities, and map them to organizational resiliency principles to determine the resiliency characteristics of applications.
- Contribute to strategy discussions and decisions on overall application design and the best approach for implementing cloud and on-premises solutions.
- Focus on continuous improvement practices as needed to meet system resiliency imperatives.
- Define high availability and resilience standards and guidelines for embracing technologies from AWS and other service providers.
- Mitigate risk by following established procedures and supervising controls, spotting key errors, and demonstrating strong ethical behavior.

Experience: 7+ years
Location: Hyderabad

Additional Information - Talents Needed for Success:
- Minimum of 7 years of related experience.
- Bachelor's degree required; Master's degree and/or equivalent experience preferred.
- Minimum of 3 years of experience in testing, architecting, and delivering cloud-based solutions.
- Expertise with industry patterns, chaos engineering methodologies, and techniques across disaster recovery subject areas.
- Specialist in highly available architecture and solution implementation.
- Chaos engineering / resiliency testing experience for distributed applications using tools like Gremlin or Cavisson NetHavoc (a minimal, tool-agnostic sketch follows below).
- Enterprise Java technologies, tools, and system architectures; Splunk and application monitoring tooling such as Dynatrace or AppDynamics.
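For illustration only, here is a minimal Python sketch of the kind of automated resiliency test this posting describes: verify a steady-state hypothesis, inject a fault, and confirm the service recovers. The endpoint URLs and the fault-injection API are hypothetical; in practice the injection step would go through the chosen chaos tool's own agents or API (Gremlin, NetHavoc, etc.), and the result would feed a scorecard like the one mentioned above.

```python
import time
import requests  # assumed available; all URLs below are hypothetical placeholders

SERVICE_HEALTH = "http://payments.internal.example/health"     # hypothetical service under test
CHAOS_API = "http://chaos-controller.internal.example/inject"  # hypothetical fault-injection endpoint


def steady_state_ok(threshold_ms: float = 250.0) -> bool:
    """Steady-state hypothesis: the health endpoint answers 200 within the latency budget."""
    start = time.monotonic()
    resp = requests.get(SERVICE_HEALTH, timeout=5)
    elapsed_ms = (time.monotonic() - start) * 1000
    return resp.status_code == 200 and elapsed_ms < threshold_ms


def run_experiment() -> None:
    assert steady_state_ok(), "Aborting: steady state not met before injection"

    # Inject a fault (e.g. kill one instance) via the chaos tool's API.
    requests.post(CHAOS_API, json={"target": "payments", "fault": "instance-kill"}, timeout=10)

    # Give the platform time to detect the failure and fail over.
    time.sleep(60)

    # The experiment passes only if the steady state is restored after the fault.
    assert steady_state_ok(), "Resiliency gap: service did not recover within the window"
    print("Experiment passed: steady state held through the injected failure")


if __name__ == "__main__":
    run_experiment()
```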
Posted 3 weeks ago
12 - 17 years
14 - 19 Lacs
Pune, Bengaluru
Work from Office
Project Role: Application Architect
Project Role Description: Provide functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
Must-have skills: Manufacturing Operations
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: BTech/BE

Job Title: Industrial Data Architect

Summary: We are seeking a highly skilled and experienced Industrial Data Architect with a proven track record of providing functional and/or technical expertise to plan, analyse, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Well versed in OT data quality, data modelling, data governance, data contextualization, database design, and data warehousing.

Must-Have Skills: Domain knowledge in Manufacturing IT/OT in one or more of the following verticals: Automotive, Discrete Manufacturing, Consumer Packaged Goods, Life Science.

Key Responsibilities:
- Develop and oversee industrial data architecture strategies to support advanced data analytics, business intelligence, and machine learning initiatives. This role involves collaborating with various teams to design and implement efficient, scalable, and secure data solutions for industrial operations.
- Focus on designing, building, and managing the data architecture of industrial systems.
- Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
- Own the offerings and assets on key components of the data supply chain: data governance, curation, data quality and master data management, data integration, data replication, and data virtualization.
- Create scalable and secure data structures, integrating with existing systems and ensuring efficient data flow.

Qualifications:
- Data Modeling and Architecture: Proficiency in data modeling techniques (conceptual, logical, and physical models). Knowledge of database design principles and normalization. Experience with data architecture frameworks and methodologies (e.g., TOGAF).
- Database Technologies:
  - Relational Databases: Expertise in SQL databases such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.
  - NoSQL Databases: Experience with at least one NoSQL database such as MongoDB, Cassandra, or Couchbase for handling unstructured data.
  - Graph Databases: Proficiency with at least one graph database such as Neo4j, Amazon Neptune, or ArangoDB. Understanding of graph data models, including property graphs and RDF (Resource Description Framework).
  - Query Languages: Experience with at least one query language such as Cypher (Neo4j), SPARQL (RDF), or Gremlin (Apache TinkerPop). Familiarity with ontologies, RDF Schema, and OWL (Web Ontology Language). Exposure to semantic web technologies and standards.
- Data Integration and ETL (Extract, Transform, Load): Proficiency in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi). Experience with data integration tools and techniques to consolidate data from various sources.
- IoT and Industrial Data Systems: Familiarity with Industrial Internet of Things (IIoT) platforms and protocols (e.g., MQTT, OPC UA). Experience with any of the IoT data platforms such as AWS IoT, Azure IoT Hub, or Google Cloud IoT Core. Experience working with one or more streaming data platforms such as Apache Kafka, Amazon Kinesis, or Apache Flink. Ability to design and implement real-time data pipelines (a minimal ingestion sketch follows this list). Familiarity with processing frameworks such as Apache Storm, Spark Streaming, or Google Cloud Dataflow. Understanding of event-driven design patterns and practices. Experience with message brokers like RabbitMQ or ActiveMQ. Exposure to edge computing platforms like AWS IoT Greengrass or Azure IoT Edge.
- AI/ML, GenAI: Experience working on data readiness for feeding AI/ML/GenAI applications. Exposure to machine learning frameworks such as TensorFlow, PyTorch, or Keras.
- Cloud Platforms: Experience with cloud data services from at least one provider, e.g., AWS (Amazon Redshift, AWS Glue), Microsoft Azure (Azure SQL Database, Azure Data Factory), or Google Cloud Platform (BigQuery, Dataflow).
- Data Warehousing and BI Tools: Expertise in data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery). Proficiency with Business Intelligence (BI) tools such as Tableau, Power BI, and QlikView.
- Data Governance and Security: Understanding of data governance principles, data quality management, and metadata management. Knowledge of data security best practices, compliance standards (e.g., GDPR, HIPAA), and data masking techniques.
- Big Data Technologies: Experience in big data platforms and tools such as Hadoop, Spark, and Apache Kafka. Understanding of distributed computing and data processing frameworks.
- Excellent Communication: Superior written and verbal communication skills, with the ability to effectively articulate complex technical concepts to diverse audiences.
- Problem-Solving Acumen: A passion for tackling intricate challenges and devising elegant solutions.
- Collaborative Spirit: A track record of successful collaboration with cross-functional teams and stakeholders.
- Certifications: AWS Certified Data Engineer - Associate, Microsoft Certified: Azure Data Engineer Associate, or Google Cloud Certified Professional Data Engineer certification is mandatory.
- Minimum of 14-18 years of progressive information technology experience.

Qualifications: BTech/BE
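As a hedged illustration of the real-time IIoT ingestion pattern named above: a minimal Python sketch that subscribes to sensor readings over MQTT and forwards them to a Kafka topic for downstream processing. It assumes the paho-mqtt and kafka-python packages; the broker addresses and topic names are hypothetical placeholders, not part of the posting.

```python
import json
import paho.mqtt.client as mqtt   # assumed dependency: paho-mqtt
from kafka import KafkaProducer   # assumed dependency: kafka-python

# Hypothetical endpoints and topics, for illustration only.
MQTT_BROKER = "plant-gateway.example.local"
MQTT_TOPIC = "plant1/line3/+/telemetry"
KAFKA_BOOTSTRAP = "kafka.example.local:9092"
KAFKA_TOPIC = "iiot.telemetry.raw"

producer = KafkaProducer(
    bootstrap_servers=KAFKA_BOOTSTRAP,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


def on_message(client, userdata, msg):
    # Contextualize the raw reading with its source topic before publishing downstream.
    reading = {"source": msg.topic, "payload": msg.payload.decode("utf-8")}
    producer.send(KAFKA_TOPIC, reading)


# paho-mqtt 1.x style constructor; paho-mqtt 2.x additionally requires a CallbackAPIVersion argument.
client = mqtt.Client()
client.on_message = on_message
client.connect(MQTT_BROKER, 1883)
client.subscribe(MQTT_TOPIC)
client.loop_forever()  # blocks, forwarding each MQTT message to Kafka
```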
Posted 1 month ago
5 - 7 years
0 - 0 Lacs
Chennai
Work from Office
Job Title: Lead I - Software Engineering
Hiring Location: Mumbai/Chennai/Gurgaon

Job Summary: We are seeking a Lead I in Software Engineering with 4 to 7 years of experience in software development or software architecture. The ideal candidate will possess a strong background in Angular and Java, with the ability to lead a team and drive technical projects. A Bachelor's degree in Engineering or Computer Science, or equivalent experience, is required.

Responsibilities:
- Interact with technical personnel and team members to finalize requirements.
- Write and review detailed specifications for the development of system components of moderate complexity.
- Collaborate with QA and development team members to translate product requirements into software designs.
- Implement development processes and coding best practices, and conduct code reviews.
- Operate in various development environments (Agile, Waterfall) while collaborating with key stakeholders.
- Resolve technical issues as necessary.
- Perform all other duties as assigned.

Must-Have Skills:
- Strong proficiency in Angular 1.X (70% Angular and 30% Java, or 50% Angular and 50% Java).
- Java/J2EE; familiarity with Singleton and MVC design patterns.
- Strong proficiency in SQL and/or MySQL, including optimization techniques (at least MySQL).
- Experience using tools such as Eclipse, Git, Postman, JIRA, and Confluence.
- Knowledge of test-driven development.
- Solid understanding of object-oriented programming.

Good-to-Have Skills:
- Expertise in Spring Boot, Microservices, and API development.
- Familiarity with OAuth 2.0 patterns (experience with at least 2 patterns; a minimal sketch of one pattern follows below).
- Knowledge of Graph Databases (e.g., Neo4j, Apache TinkerPop, Gremlin).
- Experience with Kafka messaging.
- Familiarity with Docker, Kubernetes, and cloud development.
- Experience with CI/CD tools like Jenkins and GitHub Actions.
- Knowledge of industry-wide technology trends and best practices.

Experience Range: 4 to 7 years of relevant experience in software development or software architecture.
Education: Bachelor's degree in Engineering, Computer Science, or equivalent experience.

Additional Information:
- Strong communication skills, both oral and written.
- Ability to interface competently with internal and external technology resources.
- Advanced knowledge of software development methodologies (Agile, etc.).
- Experience in setting up and maintaining distributed applications in Unix/Linux environments.
- Ability to complete complex bug fixes and support production issues.

Required Skills: Angular 1.X, Java 11+, SQL
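To make the OAuth 2.0 item above concrete, here is a hedged Python sketch of one common pattern, the client-credentials grant, using the requests library. The token endpoint, client ID, and secret are hypothetical placeholders; a real service would load the secret from a vault or environment variable, not source code.

```python
import requests  # assumed available

# Hypothetical identity-provider endpoint and credentials (never hard-code secrets in practice).
TOKEN_URL = "https://auth.example.com/oauth2/token"
CLIENT_ID = "reporting-service"
CLIENT_SECRET = "replace-me"


def get_access_token() -> str:
    """Client-credentials grant: exchange the service's own credentials for a bearer token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def call_protected_api(url: str) -> dict:
    """Attach the token as a Bearer credential when calling a protected resource."""
    token = get_access_token()
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=10)
    resp.raise_for_status()
    return resp.json()
```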
Posted 2 months ago
3 - 7 years
4 - 7 Lacs
Hyderabad
Work from Office
ABOUT AMGEN: Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

What you will do

Role Description: We are seeking a Senior Data Engineer with expertise in Graph Data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge in graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows (a minimal sketch follows the must-have skills below).
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements.
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you

Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic.
- Strong understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience working with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
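To illustrate the Databricks/graph combination referenced above, here is a minimal, hedged PySpark sketch of one common pattern: deriving an edge list from a relational Delta table and writing it back as a Delta table ready for bulk-loading into a graph database. The table paths and column names are hypothetical; inside a Databricks job the managed SparkSession would already exist.

```python
from pyspark.sql import SparkSession, functions as F

# In a Databricks notebook a SparkSession already exists; this builder is only for a local sketch.
spark = SparkSession.builder.appName("graph-edge-prep").getOrCreate()

# Hypothetical Delta table of orders linking customers to products.
orders = spark.read.format("delta").load("/mnt/lake/silver/orders")

# Derive PURCHASED edges: one row per (customer, product) relationship, weighted by purchase count.
edges = (
    orders
    .groupBy("customer_id", "product_id")
    .agg(F.count("*").alias("purchase_count"))
    .withColumnRenamed("customer_id", "src")
    .withColumnRenamed("product_id", "dst")
    .withColumn("relationship", F.lit("PURCHASED"))
)

# Persist the edge list as Delta; a downstream step would bulk-load it into Neo4j, Stardog, etc.
edges.write.format("delta").mode("overwrite").save("/mnt/lake/gold/edges_purchased")
```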
Good-to-Have Skills:
- Deep expertise in the Biotech & Pharma industries.
- Experience in writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile SAFe certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized, and be detail-oriented.
- Strong presentation and public speaking skills.

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.

EQUAL OPPORTUNITY STATEMENT: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly
Posted 2 months ago
4 - 8 years
5 - 15 Lacs
Chennai, Hyderabad
Work from Office
L2 Support Engineer (SRE Chaos Engineering)

Area: Private cloud (VMware, OpenStack, Kubernetes), Linux, Monitoring, Reliability Engineering. Defining and implementing practices in Resiliency Engineering, Automation, Observability, and Chaos Testing, while also ingraining a proactive chaos culture built around reliability-first design.

Scope of Work:
- Supervise a team of SREs, ensuring that the production applications the team supports are stable, reliable, and well documented.
- Own end-to-end availability and performance of mission-critical services; contribute to the design/architecture of the system.
- Analyze system architectures to identify single points of failure and other areas that may present a resiliency deficiency.
- Develop software to automate chaos and resiliency test cases that simulate failures in a system that performs financial data processing.
- Integrate chaos engineering with the CI/CD process.
- Establish a process to define a hypothesis around a steady state and to simulate real-world events; execute Game Days on mission-critical applications.
- Identify top errors and reliability issues, and drive root cause analysis to avoid repeat incidents.
- Analyze and debug complex issues across tiers, from frontend to mid-tier to infrastructure.
- Hands-on experience with any chaos tool (Harness, Litmus, Gremlin, Chaos Monkey, ChaosBlade).
- Mindset to identify and explore chaotic situations and conduct formalized experiments.
- Experience with monitoring and logging tools (e.g., Datadog, ELK, Prometheus, Grafana).
- Experience with Kubernetes and Docker.
- Deep understanding of SRE concepts like SLAs, SLOs, SLIs, and error budgets (a small worked sketch follows below).
- Experience working on cross-department efforts by communicating and negotiating with multiple teams to accomplish goals.
- Expert at troubleshooting issues and bugs.
- Programming experience (Python/Go/shell).
- Experience in the financial domain (desirable).
- Prior SRE/DevOps experience (desirable).

Skill Set:
- Experience in OS platforms (Windows, Linux, CentOS, Ubuntu, etc.).
- Highly skilled Site Reliability Engineer to join our Technology team, working as part of a cross-functional product team to create elegant solutions to highly complex and intricate business challenges.
- Ability to prioritize and multitask.
- Excellent communication and interpersonal skills.
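To make the error-budget item above concrete, here is a small Python sketch: given an availability SLO and counts of good versus total requests over a window, it computes how much of the error budget has been consumed. The SLO value and request counts are illustrative assumptions, not figures from the posting.

```python
def error_budget_report(slo: float, good_requests: int, total_requests: int) -> dict:
    """Compute error-budget consumption for an availability SLO over a rolling window."""
    allowed_failure_ratio = 1.0 - slo                   # e.g. 0.001 for a 99.9% SLO
    budget = allowed_failure_ratio * total_requests     # failures the SLO permits in the window
    bad_requests = total_requests - good_requests
    burned = bad_requests / budget if budget else float("inf")
    return {
        "availability": good_requests / total_requests,
        "allowed_bad_requests": budget,
        "actual_bad_requests": bad_requests,
        "budget_burned_fraction": burned,               # > 1.0 means the SLO is breached
    }


# Illustrative numbers: 99.9% SLO, 10M requests, 6,500 failures -> 65% of the budget burned.
print(error_budget_report(slo=0.999, good_requests=9_993_500, total_requests=10_000_000))
```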
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Bengaluru
Work from Office
Description: ***This SO# is creating a backfill for Narayana Srivastsa***
Role: SRE Chaos Engineer
Primary Skills: Dynatrace, Datadog
Secondary Skills: AWS, Kubernetes
Client Interview: Yes

Must-have skills / experience:
- 3+ years of experience in implementing observability solutions for technology and business KPIs using tools like Dynatrace, Datadog, AppDynamics, New Relic, or Prometheus and Grafana (a minimal query sketch follows below).
- Experience in designing, automating, maintaining, and optimizing observability platforms (logging, metrics, and tracing).
- Experience in monitoring application and infrastructure performance, troubleshooting production issues, and sharing RCAs.
- Experience with containers and Kubernetes clusters.
- Experience with enterprise-scale data streaming pipelines like Kafka and time-series databases.
- Experience in AWS or Azure cloud technologies.
- Experience recommending baseline monitoring thresholds, performance monitoring KPIs, and SLAs.
- Experience in proving platform resiliency through chaos engineering using tools like Gremlin, Chaos Monkey, Chaos Mesh, etc.
- Good exposure to resiliency, incident management, observability best practices, and troubleshooting.
- The ability to guide and mentor other members within the team and improve the way we collaborate, learn, and share ideas.
- Strong written and verbal communication skills.

Nice-to-have skills / experience:
- Familiarity with automation, DevOps, CI/CD, and IaC tools and practices: Terraform, Jenkins, Git, Ansible, integrating with APIs.
- Working understanding of good software, network, and systems design.
- Experience in Jira, ServiceNow, or similar tools.

Named Job Posting? (if Yes, needs to be approved by SCSC): No

Additional Details:
- Global Grade: C
- Level: To Be Defined
- Remote work possibility: No
- Global Role Family: To be defined
- Local Role Name: To be defined
- Local Skills: Datadog; AWS; Kubernetes
- Languages Required: English
- Role Rarity: To Be Defined
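As a hedged illustration of the observability skill above, here is a minimal Python sketch that queries a Prometheus server's HTTP API for a latency KPI and checks it against a baseline threshold, the kind of check that might back an alerting baseline or gate a chaos experiment. The Prometheus URL, metric name, and threshold are assumptions made for the example.

```python
import requests  # assumed available

PROMETHEUS_URL = "http://prometheus.example.local:9090"  # hypothetical server
# 95th-percentile request latency over 5 minutes; the metric name is an assumption.
QUERY = 'histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))'
LATENCY_BASELINE_SECONDS = 0.3


def p95_latency() -> float:
    """Run an instant query against the Prometheus HTTP API and return the scalar result."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    if not result:
        raise RuntimeError("Query returned no series; check the metric name")
    return float(result[0]["value"][1])  # value is a [timestamp, string_value] pair


if __name__ == "__main__":
    latency = p95_latency()
    status = "within" if latency <= LATENCY_BASELINE_SECONDS else "breaching"
    print(f"p95 latency {latency:.3f}s is {status} the {LATENCY_BASELINE_SECONDS}s baseline")
```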
Posted 2 months ago
6 - 11 years
20 - 35 Lacs
Pune
Work from Office
Performance Engineering & Testing:
- Work closely with the solution architect and lead engineer to design performance tests (a minimal load-generation sketch follows below).
- Exposure to tools like JMeter, LoadRunner, and BlazeMeter is needed.
- Performance diagnostics leveraging industry-standard APM tooling.

Required Candidate Profile - Resilience Testing & Chaos Engineering:
- Ability to understand the technical architecture and work closely with the solution architect and lead engineer to model failure scenarios and build resilience testing experience.
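The tools named above (JMeter, LoadRunner, BlazeMeter) are the usual way to run such tests; purely to illustrate what a performance test measures, here is a hedged Python sketch that fires concurrent requests at a hypothetical endpoint and reports mean and p95 latency. It is a teaching sketch under assumed names, not a substitute for those tools.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # assumed available

TARGET_URL = "https://checkout.example.com/api/cart"  # hypothetical endpoint under test
REQUESTS = 200
CONCURRENCY = 20


def timed_request(_: int) -> float:
    """Issue one GET request and return its latency in milliseconds."""
    start = time.monotonic()
    requests.get(TARGET_URL, timeout=10)
    return (time.monotonic() - start) * 1000


with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_request, range(REQUESTS)))

p95 = latencies[int(0.95 * len(latencies)) - 1]
print(f"requests={REQUESTS} mean={statistics.mean(latencies):.1f}ms p95={p95:.1f}ms")
```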
Posted 3 months ago