
16 Gremlin Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

1.0 - 10.0 years

0 Lacs

Karnataka

On-site

Our client values developer experience and quality infrastructure as crucial components of delivering high-performance, resilient, and secure data products. As the Engineering Manager for the Developer Experience & Services team, you will lead an engineering group dedicated to developer productivity, internal tooling, and quality assurance infrastructure. The role blends platform engineering and quality engineering: your team will build the systems, tools, and automation frameworks that drive engineering velocity, product reliability, and operational excellence. You will play a pivotal role in evolving the core developer platform and executing strategies for test infrastructure, performance benchmarking, fault tolerance verification, and chaos testing.

Responsibilities:
- Lead and grow a high-impact team responsible for developer experience, platform tooling, and quality infrastructure.
- Own and advance the company-wide developer platform: internal tools for build and deployment, observability, monitoring, alerting, remote dev environments, local dev tooling, and engineering standards.
- Develop quality assurance infrastructure: scalable test automation frameworks, performance testing and benchmarking infrastructure, chaos engineering and fault injection systems, and support for deployment strategies.
- Drive adoption of engineering best practices in testing, reliability, and continuous delivery.
- Collaborate with engineers to identify and remove friction points through tooling and automation.
- Define metrics and SLAs for engineering productivity, test coverage, release confidence, and platform uptime to ensure continuous improvement (a small uptime-metric sketch follows this listing).
- Lead technical architecture discussions to keep internal platforms and tooling scalable and maintainable.
- Cultivate a culture of ownership, experimentation, and learning within the team.

Key Requirements:
- 10+ years of software engineering experience with a proven track record of building infrastructure or platforms.
- At least 1 year in a team leadership or engineering management role.
- Customer-centric mindset, growth mindset, and drive for impact.
- Strong coding, design, and architectural skills; able to serve as a technical leader.
- Analytical and problem-solving skills.
- Proficiency with data-driven metrics for operational excellence.
- Excellent oral and written communication skills, including cross-team communication focused on productivity and quality.
- Familiarity with tools and frameworks such as GitHub Actions, ArgoCD, Spinnaker, Jenkins, Pytest, Selenium, JUnit, JMeter, Locust, Chaos Mesh, Gremlin, Prometheus, Grafana, OpenTelemetry, and the Elastic Stack.

If you have DevX team experience and are passionate about making a hands-on impact on transformative projects, please reach out to rajeshwari.vh@careerxperts.com.
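As a hedged illustration of one "platform uptime" SLA metric of the kind this posting asks the team to define (all names here are invented, not from the posting): monthly uptime computed from recorded incident downtime.

```python
# Hypothetical sketch: monthly platform-uptime percentage from a list of
# per-incident downtime durations.
from datetime import timedelta

def monthly_uptime(incidents: list[timedelta], days_in_month: int = 30) -> float:
    """Percentage uptime for the month, given per-incident downtime."""
    total = timedelta(days=days_in_month)
    downtime = sum(incidents, timedelta())
    return 100.0 * (1 - downtime / total)

# Two incidents totalling one hour in a 30-day month -> ~99.861% uptime.
print(f"{monthly_uptime([timedelta(minutes=42), timedelta(minutes=18)]):.3f}%")
```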

Posted 2 days ago


5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

You should have at least 5 years of backend development experience with proficiency in Node.js, TypeScript, and NestJS. Strong skills in PostgreSQL, MongoDB, and Elasticsearch are required, along with experience in Neptune and Gremlin. Demonstrated team leadership and project management skills are essential, as is a solid understanding of scalable backend systems and prior experience in architecture and database design. A Bachelor's degree in Computer Science or a related field is preferred, along with excellent problem-solving and communication abilities.

As Backend Lead, you will lead a team of 5-6 backend engineers, develop scalable systems using Node.js, TypeScript, and NestJS, and manage and optimize databases such as PostgreSQL and MongoDB. You will also work with Elasticsearch, Neptune, and Gremlin, conduct code reviews to enforce coding standards, collaborate with product managers and frontend teams, and oversee project planning and resource management.

Location: Andheri, Mumbai

The interview process consists of a Screening Round, Technical R1 (Coding + DSA), Technical R2 (System Design + Architecture), Technical R3, and a Final Round with the Founder.
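For context on the Neptune and Gremlin requirement, here is a hedged sketch of a Gremlin traversal written with the gremlinpython driver (the posting's stack is Node.js, where an analogous Gremlin driver exists; Python is used here for illustration). The endpoint and the "user"/"follows" schema are assumptions, not from the posting.

```python
# Hypothetical traversal against a placeholder Amazon Neptune endpoint.
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

# For each user vertex, project its name and its count of outgoing
# "follows" edges.
rows = (g.V().hasLabel("user")
         .project("name", "follows")
         .by("name")
         .by(__.out("follows").count())
         .toList())
print(rows)
conn.close()
```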

Posted 1 week ago


4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are looking for a highly skilled Performance Testing Engineer with expertise in Apache JMeter to join our QA team. The ideal candidate will design and execute performance tests and gather performance requirements from stakeholders to ensure systems meet expected load, responsiveness, and scalability criteria.

As a Performance Engineer at Boomi, you will validate and recommend performance optimizations in our computing infrastructure and software, and collaborate with Product Development and Site Reliability Engineering teams on performance monitoring, tuning, and tooling. Your responsibilities include analyzing software architecture, working on capacity planning, identifying KPIs, and designing scalability and resiliency tests using tools like JMeter, BlazeMeter, and NeoLoad.

Essential requirements: expertise in performance engineering fundamentals; monitoring performance with native Linux OS and APM tools; understanding of AWS services for infrastructure analysis; experience with tools like New Relic and Splunk; skill in analyzing heap dumps, thread dumps, and SQL slow-query logs; and the ability to recommend optimal resource configurations across cloud, virtual machine, and container technologies.

Desirable: experience writing custom monitoring tools in Java, Python, or similar languages; capacity planning using AI/ML; and performance tuning of Java or similar application code.

At Boomi, we offer a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust environment. If you are passionate about solving challenging problems, working with cutting-edge technology, and making a real impact, we encourage you to explore a career with Boomi. Join our Performance, Scalability, and Resiliency (PSR) Engineering team in Bangalore/Hyderabad, India, to do the best work of your career and make a profound social impact.
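Since the role centers on JMeter, a small hedged sketch of how a test plan is typically run headless from CI; the flags are standard JMeter CLI options, while the .jmx plan and output paths are placeholders.

```python
# Drive a JMeter plan in non-GUI mode from Python (paths are hypothetical).
import subprocess

subprocess.run(
    ["jmeter",
     "-n",                        # non-GUI mode
     "-t", "checkout_plan.jmx",   # hypothetical test plan
     "-l", "results.jtl",         # raw sample log
     "-e", "-o", "html_report"],  # generate the HTML dashboard afterwards
    check=True)
```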

Posted 1 week ago


13.0 - 17.0 years

0 Lacs

Karnataka

On-site

As Head of Quality Assurance at Commcise in Bangalore, you will manage testing activities to ensure the best user product experience. The role requires 13-15 years of relevant experience and an Engineering or IT degree, along with strong expertise in software testing concepts and methodologies, excellent communication skills, and technical aptitude, especially in automation.

You will need a deep understanding of capital markets, trading platforms, wealth management, and regulatory frameworks such as MiFID, SEC, SEBI, and FCA; experience with financial instruments and post-trade processes is also necessary. You will define and implement comprehensive testing strategies covering functional and non-functional testing, develop test governance models, and enforce QA best practices.

The role calls for a strong grasp of programming concepts, coding standards, and test frameworks in Java, Python, and JavaScript. Expertise in test automation frameworks such as Selenium and Appium, API testing, and knowledge of connectivity protocols will be advantageous. You will drive AI-driven automation initiatives, applying AI and machine learning to test automation. Experience with continuous testing in CI/CD pipelines, infrastructure as code, cloud platforms, and observability tools for real-time monitoring is required, as is expertise in performance testing tools, security testing methodologies, resilience testing, and chaos engineering.

Strong leadership, team development, and stakeholder management across teams are crucial, along with an Agile mindset, experience leading Agile testing transformations, and implementing BDD/TDD practices. Strong strategic planning and execution skills, and a willingness to be hands-on when required, are essential for driving collaborative test strategies. This role offers the opportunity to work in a dynamic environment and contribute significantly to the quality and reliability of products in the financial technology industry.
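Purely as illustration of the Selenium-based automation the posting references, a deliberately minimal pytest-style smoke check; the URL, title assertion, and local driver setup are placeholders, not from the posting.

```python
# Minimal pytest + Selenium smoke test (illustrative only).
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    d = webdriver.Chrome()   # assumes a local Chrome/chromedriver setup
    yield d
    d.quit()

def test_homepage_loads(driver):
    driver.get("https://example.com")   # placeholder URL
    assert "Example" in driver.title
```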

Posted 1 week ago


7.0 - 11.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

At ITIDATA, an EXL Company, you will work with Cypher or Gremlin query languages, Neo4j, Python, PySpark, Hive, and Hadoop on tasks grounded in graph theory. Specifically, your role will involve creating and managing knowledge graphs using Neo4j.

We are seeking Neo4j Developers with 7-10 years of experience in data engineering, including 2-3 years of hands-on experience with Neo4j. If you are looking for an exciting opportunity in graph databases, this position is ideal for you.

Key Skills & Responsibilities:
- Expertise in Cypher or Gremlin query languages
- Strong understanding of graph theory
- Experience creating and managing knowledge graphs using Neo4j
- Optimizing performance and scalability of graph databases
- Researching and implementing new technology solutions
- Working with application teams to integrate graph database solutions

Candidates available immediately or within 30 days will be given preference. Join us and be part of a dynamic team working on cutting-edge graph database technologies.
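For illustration, a minimal sketch of the kind of knowledge-graph work this posting describes, using the official neo4j Python driver. The connection details and the Drug/Target schema are placeholders, not from the posting.

```python
# Create one knowledge-graph relationship idempotently, then read it back.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # placeholder
with driver.session() as session:
    session.run(
        "MERGE (d:Drug {name: $drug}) "
        "MERGE (t:Target {name: $target}) "
        "MERGE (d)-[:BINDS]->(t)",
        drug="aspirin", target="COX-1")
    result = session.run(
        "MATCH (d:Drug)-[:BINDS]->(t:Target) "
        "RETURN d.name AS drug, t.name AS target")
    for record in result:
        print(record["drug"], "->", record["target"])
driver.close()
```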

Posted 2 weeks ago



3.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

The purpose of this role is to prepare test cases and perform testing of the product/platform/solution to be deployed at a client end, ensuring it meets 100% of quality assurance parameters.

Performance Engineering requirements:
- Good hands-on scripting experience with tools like JMeter (mandatory), LoadRunner, NeoLoad, or any mobile performance tool
- Worked with protocols such as Web (HTTP/HTML), Web Services, SAP-Web, SAP-GUI, TruClient, and mobile protocols
- Tested applications such as .NET, Java, SAP Web, SAP GUI, MQ, etc.
- Able to write user-defined functions/custom code to solve scripting challenges
- Experience with APM tools: Dynatrace (mandatory), AppDynamics, Splunk, New Relic, Wily, etc.
- Experience in chaos engineering using tools like Gremlin, Chaos Monkey, or Chaos Mesh
- Heap and thread dump analysis using any tool
- Knowledge of the JVM and CLR
- Experience with early (shift-left) performance testing
- Good knowledge of monitoring (client side, server side, DB, network, and load balancer)
- Worked with Unix/Linux commands such as vmstat and nmon
- Written SQL queries and used profiling tools like SQL Profiler
- Presented performance reports to clients with detailed inferences
- Good knowledge of server tuning and optimization
- Good knowledge of capacity planning

Optional:
- Knowledge of programming languages like Core Java, Python, or shell scripting
- Experience with ALM/PC/QC/JIRA, etc.
- Developed automated utilities to analyze logs/reports (a report-analysis sketch follows this listing)
- Install, configure, maintain, and administer the performance tools

Mandatory Skills: Performance Testing. Experience: 3-5 years.
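A hedged sketch of the "automated utilities to analyze logs/reports" this posting mentions: computing latency percentiles from a JMeter .jtl results file. The "elapsed" column name follows JMeter's default CSV output; the file path is a placeholder.

```python
# Latency percentiles from a JMeter .jtl CSV (path is hypothetical).
import csv
from statistics import quantiles

with open("results.jtl", newline="") as f:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]

cuts = quantiles(elapsed, n=100)  # 99 percentile cut points
for p in (50, 90, 95, 99):
    print(f"p{p} = {cuts[p - 1]:.0f} ms")
```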

Posted 3 weeks ago


5.0 - 8.0 years

9 - 14 Lacs

Coimbatore

Work from Office

The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Performance Engineering requirements (minimum work experience: 5-12 years):
- Good hands-on scripting experience with tools like JMeter (mandatory), LoadRunner, NeoLoad, or any mobile performance tool
- Worked with protocols such as Web (HTTP/HTML), Web Services, SAP-Web, SAP-GUI, TruClient, and mobile protocols
- Tested applications such as .NET, Java, SAP Web, SAP GUI, MQ, etc.
- Able to write user-defined functions/custom code to solve scripting challenges
- Experience with APM tools: Dynatrace (mandatory), AppDynamics, Splunk, New Relic, Wily, etc.
- Experience in chaos engineering using tools like Gremlin, Chaos Monkey, or Chaos Mesh
- Heap and thread dump analysis using any tool (a thread-dump sketch follows this listing)
- Knowledge of the JVM and CLR
- Experience with early (shift-left) performance testing
- Good knowledge of monitoring (client side, server side, DB, network, and load balancer)
- Worked with Unix/Linux commands such as vmstat and nmon
- Written SQL queries and used profiling tools like SQL Profiler
- Presented performance reports to clients with detailed inferences
- Good knowledge of server tuning and optimization
- Good knowledge of capacity planning

Optional:
- Knowledge of programming languages like Core Java, Python, or shell scripting
- Experience with ALM/PC/QC/JIRA, etc.
- Developed automated utilities to analyze logs/reports
- Install, configure, maintain, and administer the performance tools

Support and escalation duties:
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by diagnosing queries and guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions where appropriate, with the objective of retaining customers' and clients' business
- Organize ideas and communicate oral messages appropriate to listeners and situations
- Follow up with scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs

Capability building:
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge skill gaps identified through interviews with Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge through self-learning opportunities and personal networks

Mandatory Skills: Performance Testing. Experience: 5-8 years.
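An illustrative sketch of the thread-dump triage this posting lists: counting JVM thread states in a saved `jstack` dump. The dump file name is a placeholder; the `java.lang.Thread.State:` line format is standard jstack output.

```python
# Count JVM thread states in a jstack dump (file name is hypothetical).
from collections import Counter

states = Counter()
with open("thread_dump.txt") as f:
    for line in f:
        if "java.lang.Thread.State:" in line:
            # e.g. "   java.lang.Thread.State: TIMED_WAITING (sleeping)"
            states[line.split("java.lang.Thread.State:")[1].split()[0]] += 1

print(states)  # e.g. Counter({'RUNNABLE': 120, 'WAITING': 45, 'BLOCKED': 7})
```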

Posted 3 weeks ago


5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

As a Lead Backend Engineer at our dynamic tech company, you will play a pivotal role in shaping the backend architecture and leading a talented team of 5-6 engineers. Your expertise across a diverse tech stack and your leadership skills will be essential in driving our backend projects to success.

You will guide and mentor a team of 5-6 backend engineers, ensuring the team delivers high-quality code, adheres to best practices, and meets project deadlines. Your hands-on development with TypeScript, Node.js, and NestJS will be instrumental in building robust and scalable backend systems. Proficiency in managing databases like PostgreSQL and MongoDB will help you implement efficient data storage and retrieval strategies, and your expertise in Elasticsearch, Neptune, and Gremlin will enable you to handle complex data structures and relationships effectively.

You will conduct code reviews, enforce coding standards, and maintain high-quality software. Collaboration with frontend teams, designers, and product managers will be crucial for seamless integration and alignment with business goals. You will also plan, track, and report on project progress while managing resources to meet deadlines.

You should have a Bachelor's degree in Computer Science or a related field and a minimum of 5 years of backend development experience, with strong proficiency in TypeScript, Node.js, NestJS, PostgreSQL, MongoDB, Elasticsearch, Neptune, and Gremlin. Proven experience leading a team of engineers, excellent problem-solving skills, attention to detail, and strong communication and collaboration skills are essential. Experience in a fast-paced, agile environment and prior experience in a similar lead role are preferred; contributions to open-source projects or a strong GitHub portfolio are a plus.
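For the Elasticsearch part of this stack, a minimal sketch using the elasticsearch-py client's v8-style API (the posting's services are in Node.js; Python is used here for illustration). The cluster URL, index name, and query are assumptions.

```python
# Illustrative full-text search against a placeholder cluster and index.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder cluster
resp = es.search(index="products",
                 query={"match": {"title": "backpack"}},
                 size=5)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```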

Posted 3 weeks ago


5.0 - 10.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering
Service Line: Quality

Responsibilities:
1. Design and implement chaos experiments: develop and execute chaos experiments that simulate real-world failures and identify vulnerabilities in our systems; define clear objectives, hypotheses, and success metrics for each experiment; document experiment procedures, results, and lessons learned.
2. Experience using chaos engineering tools: proficiently use and manage tools such as Chaos Monkey, Gremlin, ToxiProxy, Chaos Mesh, Chaos Blade, Azure Chaos Studio, and AWS Fault Injection Simulator (some of the listed tools are mandatory skills; experience with a combination of multiple tools is an added advantage).
3. Cloud platform chaos engineering: design and execute chaos experiments in cloud environments (Azure, AWS, and GCP) using Azure Chaos Studio, AWS Fault Injection Simulator, or another chaos engineering tool (a boto3 sketch follows this listing).
4. Documentation and reporting: maintain detailed documentation of chaos experiments, procedures, and results; generate reports and present findings to stakeholders.

Preferred Skills: Technology - Machine Learning - Python; Technology - Cloud Platform - Azure IoT; Technology - Cloud Platform - GCP DevOps
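A hedged sketch of kicking off an AWS Fault Injection Simulator experiment (one of the tools the posting names) from a pre-created experiment template via boto3; the template ID is a placeholder.

```python
# Start a pre-defined FIS experiment and print its initial state.
import boto3

fis = boto3.client("fis")
resp = fis.start_experiment(experimentTemplateId="EXT1234567890ab")  # hypothetical ID
experiment = resp["experiment"]
print(experiment["id"], experiment["state"]["status"])
```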

Posted 3 weeks ago


8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are looking for a Svelte Developer to build lightweight, reactive web applications with excellent performance and maintainability.

Key Responsibilities:
- Design and implement applications using Svelte and SvelteKit
- Build reusable components and libraries for future use
- Optimize applications for speed and responsiveness
- Collaborate with design and backend teams to create cohesive solutions

Required Skills & Qualifications:
- 8+ years of experience with Svelte or similar reactive frameworks
- Strong understanding of JavaScript, HTML, CSS, and reactive programming concepts
- Familiarity with SSR and JAMstack architectures
- Experience integrating RESTful APIs or GraphQL endpoints

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies

Posted 4 weeks ago


4.0 - 8.0 years

5 - 15 Lacs

Bengaluru

Work from Office

Profile: Resiliency Testing
Experience: 4.5-8 years
Location: Bangalore
Notice Period: max 30 days

- Minimum of 4.5 years of related experience
- Bachelor's degree preferred, or equivalent experience
- Basic Java/Selenium development skills, with significant experience applying those skills in test environments
- Chaos engineering/resiliency testing experience for distributed applications using tools like Gremlin or AWS FIS
- Able to interpret technical designs and specifications and design automated solutions accordingly
- Capable of working on multiple work streams concurrently in a fast-paced environment, with multi-tasking and context switching
- Experience with API and AWS tools a plus

Interested candidates, kindly share your updated CV at pooja.roy@esolgobal.com or WhatsApp it to 7814103214.

Posted 1 month ago


5.0 - 7.0 years

10 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

What You'll Do:
- Develop a basic understanding of the product being delivered
- Develop a strong understanding of the logical architecture and technical design of the applications being supported
- Work closely with application development, IT Architecture, and other key partners to understand application design and baseline it against the defined resiliency principles
- Create resiliency test scenarios and synthetic test data per the data distribution and volume requirements
- Develop resiliency test automation scripts leveraging Gremlin or an in-house resiliency test automation framework (see the sketch below)
- Contribute to the technical aspects of Delivery Pipeline adoption
- Track defects to closure, report test results, continuously monitor execution milestones, and escalate as required
- Adhere to all process standards, guidelines, and documented procedures

Talents Needed for Success:
- Minimum of 4.5 years of related experience
- Bachelor's degree preferred, or equivalent experience
- Basic Java/Selenium development skills, with significant experience applying those skills in test environments
- Chaos engineering/resiliency testing experience for distributed applications using tools like Gremlin or AWS FIS
- Able to interpret technical designs and specifications and design automated solutions accordingly
- Capable of working on multiple work streams concurrently in a fast-paced environment, with multi-tasking and context switching
- Experience with API and AWS tools a plus

Feel free to reach out at Gurpreet.singh@esolglobal.com / 7087000690 (WhatsApp).
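Purely illustrative: the shape a resiliency test script might take against a hypothetical in-house fault-injection client (every name here is invented; a real script would call Gremlin's API or the in-house framework the posting mentions).

```python
# Hypothetical resiliency test: inject latency, assert the SLO still holds.
import time
import requests

class InjectLatency:
    """Stand-in for a fault-injection client (invented API)."""
    def __init__(self, service: str, ms: int):
        self.service, self.ms = service, ms
    def __enter__(self):
        print(f"injecting {self.ms} ms latency into {self.service}")
        return self
    def __exit__(self, *exc):
        print(f"clearing fault on {self.service}")

def test_checkout_survives_dependency_latency():
    with InjectLatency("payments", ms=500):
        t0 = time.time()
        r = requests.get("https://staging.example.com/health")  # placeholder URL
        assert r.ok and time.time() - t0 < 2.0  # SLO: degraded but responsive
```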

Posted 1 month ago


3.0 - 7.0 years

5 - 9 Lacs

Hyderabad

Work from Office

What you will do

Role Description: We are seeking a Senior Data Engineer with expertise in graph data technologies to join our data engineering team and contribute to scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. The role combines core data engineering skills with specialized knowledge of graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate has a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows (a PySpark sketch follows this listing)
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing
- Ensure data quality, governance, and security standards are met across all graph data initiatives
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics

What we expect of you

Must-Have Skills:
- Hands-on experience with Databricks, including PySpark, Delta Lake, and notebook-based development
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic
- Strong understanding of graph theory, graph modeling, and traversal algorithms
- Proficiency in workflow orchestration and performance tuning for big data processing
- Strong understanding of AWS services
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills
- Excellent collaboration and communication skills, with experience in the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Master's degree and 3-4+ years of Computer Science, IT, or related field experience, or Bachelor's degree and 5-8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile (SAFe) certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Quick to learn, organized, and detail-oriented
- Strong presentation and public speaking skills
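A minimal Databricks-style PySpark sketch of the graph-pipeline work the posting describes: deriving a weighted edge list from a Delta table for bulk-loading into a graph database. Table and column names are invented for illustration.

```python
# Build a weighted edge list from an interactions table (names hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

interactions = spark.read.table("silver.interactions")  # hypothetical source
edges = (interactions
         .groupBy("src_id", "dst_id")
         .agg(F.count("*").alias("weight"))
         .filter(F.col("weight") > 1))  # drop one-off links
edges.write.format("delta").mode("overwrite").saveAsTable("gold.graph_edges")
```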

Posted 1 month ago


7.0 - 10.0 years

9 - 12 Lacs

Hyderabad

Work from Office

Your Responsibilities:
- Drive capability building for resiliency testing targeted at modernization initiatives, the common capabilities framework, and reference architectures
- Work closely with application development, IT Architecture, the Application Resiliency Foundation, and other key partners to ensure end-to-end application resiliency while upholding ETE policy, procedures, and standards
- Develop and support ETE Resiliency services such as the Resiliency Test Scorecard, Failure Mode Analysis, and Test Scenarios
- Improve and set the direction for the resiliency test automation framework, publishing reusable artifacts to the Developer Marketplace
- Capture technical requirements, assess capabilities, and map them to organizational resiliency principles to determine the resiliency characteristics of applications
- Contribute to strategy discussions and decisions on overall application design and the best approach for implementing cloud and on-premises solutions
- Focus on continuous-improvement practices as needed to meet system resiliency imperatives
- Define high-availability and resilience standards and guidelines for adopting technologies from AWS and other service providers
- Mitigate risk by following established procedures and supervising controls, spotting key errors, and demonstrating strong ethical behavior

Experience: 7+ years
Location: Hyderabad

Talents Needed for Success:
- Minimum of 7 years of related experience
- Bachelor's degree required; Master's preferred, and/or equivalent experience
- Minimum of 3 years of experience testing, architecting, and delivering cloud-based solutions
- Expertise in industry patterns, chaos engineering methodologies, and techniques across the disaster recovery subject areas
- Specialist in highly available architecture and solution implementation
- Chaos engineering/resiliency testing experience for distributed applications using tools like Gremlin or Cavisson NetHavoc
- Enterprise Java technologies, tools, and system architectures; Splunk and application monitoring tooling such as Dynatrace or AppDynamics

Posted 2 months ago


12 - 17 years

14 - 19 Lacs

Pune, Bengaluru

Work from Office

Project Role: Application Architect
Project Role Description: Provide functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
Must-have skills: Manufacturing Operations
Good-to-have skills: NA
Minimum 12 years of experience required
Educational Qualification: BTech/BE

Job Title: Industrial Data Architect

Summary: We are seeking a highly skilled and experienced Industrial Data Architect with a proven track record of providing functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. The candidate should be well versed in OT data quality, data modelling, data governance, data contextualization, database design, and data warehousing.

Must-have skills: Domain knowledge of manufacturing IT/OT in one or more of the following verticals: Automotive, Discrete Manufacturing, Consumer Packaged Goods, Life Sciences.

Key Responsibilities:
- Develop and oversee industrial data architecture strategies to support advanced data analytics, business intelligence, and machine learning initiatives
- Collaborate with various teams to design and implement efficient, scalable, and secure data solutions for industrial operations
- Design, build, and manage the data architecture of industrial systems
- Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests
- Own the offerings and assets on key components of the data supply chain: data governance, curation, data quality and master data management, data integration, data replication, and data virtualization
- Create scalable and secure data structures, integrating with existing systems and ensuring efficient data flow

Qualifications:
- Data modeling and architecture: proficiency in data modeling techniques (conceptual, logical, and physical models); knowledge of database design principles and normalization; experience with data architecture frameworks and methodologies (e.g., TOGAF)
- Relational databases: expertise in SQL databases such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server
- NoSQL databases: experience with at least one NoSQL database (e.g., MongoDB, Cassandra, Couchbase) for handling unstructured data
- Graph databases: proficiency with at least one graph database such as Neo4j, Amazon Neptune, or ArangoDB; understanding of graph data models, including property graphs and RDF (Resource Description Framework)
- Query languages: experience with at least one of Cypher (Neo4j), SPARQL (RDF), or Gremlin (Apache TinkerPop); familiarity with ontologies, RDF Schema, and OWL (Web Ontology Language); exposure to semantic web technologies and standards
- Data integration and ETL: proficiency with ETL tools and processes (e.g., Talend, Informatica, Apache NiFi); experience with data integration tools and techniques to consolidate data from various sources
- IoT and industrial data systems: familiarity with Industrial Internet of Things (IIoT) platforms and protocols (e.g., MQTT, OPC UA); experience with an IoT data platform such as AWS IoT, Azure IoT Hub, or Google Cloud IoT Core; experience with one or more streaming data platforms such as Apache Kafka, Amazon Kinesis, or Apache Flink; ability to design and implement real-time data pipelines, with familiarity with processing frameworks such as Apache Storm, Spark Streaming, or Google Cloud Dataflow; understanding of event-driven design patterns and practices, with experience with message brokers like RabbitMQ or ActiveMQ; exposure to edge computing platforms like AWS IoT Greengrass or Azure IoT Edge (an MQTT sketch follows this listing)
- AI/ML and GenAI: experience preparing data for AI/ML/GenAI applications; exposure to machine learning frameworks such as TensorFlow, PyTorch, or Keras
- Cloud platforms: experience with cloud data services from at least one provider, e.g., AWS (Amazon Redshift, AWS Glue), Microsoft Azure (Azure SQL Database, Azure Data Factory), or Google Cloud Platform (BigQuery, Dataflow)
- Data warehousing and BI tools: expertise in data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery); proficiency with Business Intelligence tools such as Tableau, Power BI, and QlikView
- Data governance and security: understanding of data governance principles, data quality management, and metadata management; knowledge of data security best practices, compliance standards (e.g., GDPR, HIPAA), and data masking techniques
- Big data technologies: experience with big data platforms and tools such as Hadoop, Spark, and Apache Kafka; understanding of distributed computing and data processing frameworks
- Excellent communication: superior written and verbal communication skills, with the ability to articulate complex technical concepts to diverse audiences
- Problem-solving acumen: a passion for tackling intricate challenges and devising elegant solutions
- Collaborative spirit: a track record of successful collaboration with cross-functional teams and stakeholders
- Certifications: AWS Certified Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, or Google Cloud Certified Professional Data Engineer certification is mandatory
- Minimum of 14-18 years of progressive information technology experience

Qualifications: BTech/BE
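A hedged sketch of IIoT ingestion over MQTT, one of the protocols the posting lists, using the paho-mqtt client (1.x-style constructor). The broker address, topic, and JSON payload shape are placeholders, not from the posting.

```python
# Subscribe to a hypothetical plant telemetry topic and print readings.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    print(msg.topic, reading.get("sensor"), reading.get("value"))

client = mqtt.Client()  # paho-mqtt 1.x constructor
client.on_message = on_message
client.connect("broker.example.com", 1883)    # hypothetical broker
client.subscribe("plant1/line3/telemetry/#")  # hypothetical topic
client.loop_forever()
```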

Posted 2 months ago
