Greymatter Innovationz helps you stay digitally relevant across domains, technologies, and skillsets, every day.

We are looking for: Telemetry Data Analyst
Locations: PAN India

Functional Responsibilities
- Support building an Enterprise Data Lakehouse focused on observability.
- Define relevant data zones and enforce schema contracts for telemetry ingestion.
- Define the strategy for leveraging AI/ML components for anomaly detection, predictive insights, and event intelligence.
- Determine security procedures for telemetry pipelines and storage layers.
- Collaborate cross-functionally with DevOps, SRE, and platform teams to align telemetry strategy.

Technical Responsibilities
- Contribute to building the data platform using Databricks, ClickHouse, Kafka, AWS storage services, and Amazon Bedrock.
- Evaluate and select tools to enable real-time telemetry streaming, data transformation, and event correlation across diverse toolsets (e.g., Splunk, ServiceNow, Grafana Enterprise).
- Optimize telemetry data caching, normalization, and enrichment for actionable insights (see the enrichment sketch after this posting).

Required Skills & Qualifications
- 8+ years of experience in data platform architecture, observability, or site reliability engineering.
- Deep expertise in telemetry pipeline design, data modeling, and distributed systems.
- Hands-on experience with cloud-native platforms (AWS, Azure) and observability stacks (Splunk, AppDynamics, Grafana, ServiceNow ITAM).
- Strong understanding of AIOps principles, event intelligence, and cognitive observability.
- Proven ability to lead cross-functional teams and drive architectural transformation initiatives.

Preferred Attributes
- Experience working with large-scale enterprise clients.
- Familiarity with governance frameworks and compliance in telemetry data handling.
- Knowledge of observability maturity models and telemetry strategy enablement.

At Greymatter Innovationz, we offer: Motivating work environment. Excellent work culture. Help to upgrade yourself to the next level. And more!
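The technical responsibilities above center on streaming telemetry through Kafka and enriching it before it lands in the lakehouse. The following is a minimal sketch of that normalization/enrichment step, assuming a kafka-python client and hypothetical topic names (telemetry.raw, telemetry.enriched); it illustrates the idea only and is not part of the platform described in the role.

# Minimal telemetry enrichment sketch (illustrative only).
# Assumes kafka-python is installed; topic names and broker address are hypothetical.
import json
from datetime import datetime, timezone

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "telemetry.raw",                       # hypothetical source topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)

def normalize(event: dict) -> dict:
    """Normalize field names and stamp the event with ingestion metadata."""
    return {
        "service": event.get("service", "unknown"),
        "metric": event.get("metric") or event.get("name"),
        "value": float(event.get("value", 0.0)),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

for message in consumer:
    producer.send("telemetry.enriched", normalize(message.value))  # hypothetical sink topic

In practice the enriched stream would then be correlated and persisted in the lakehouse (e.g., ClickHouse or Databricks tables) for downstream anomaly detection.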
Greymatter Innovationz helps you stay digitally relevant across domains, technologies, and skillsets, every day.

We are looking for: Azure Databricks Admin
Location: Anywhere in India / Bangalore preferred
Duration: FTE

Databricks Platform Support and Capability Enhancement:
• Provide technical support for Databricks, addressing issues related to performance, connectivity, and usability.
• Monitor and troubleshoot Databricks jobs, clusters, and notebooks to ensure operational efficiency (a monitoring sketch follows this posting).
• Collaborate with data engineering teams to optimize data pipelines and workflows on Databricks.
• Manage resources related to Databricks, including IAM roles, security groups, and networking configurations.
• Assist in the deployment and configuration of Databricks environments, ensuring alignment with best practices for security and performance.
• Conduct training sessions and create documentation to help users effectively leverage Databricks features.
• Stay updated on Databricks and Azure developments, providing recommendations for enhancements and new features.
• Participate in the on-call support rotation and respond to urgent platform incidents.
• Help with the release of new features and capabilities for Databricks.

At Greymatter Innovationz, we offer: Motivating work environment. Excellent work culture. Help to upgrade yourself to the next level. And more!
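For routine monitoring of jobs and clusters, admins often script against the Databricks REST API. Below is a minimal sketch, assuming a workspace URL and personal access token supplied via environment variables; endpoint versions and response fields should be verified against the current Databricks REST API documentation.

# Minimal Databricks monitoring sketch (illustrative; verify endpoints against
# current Databricks REST API docs). Reads workspace URL and token from env vars.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]       # e.g. the workspace URL
TOKEN = os.environ["DATABRICKS_TOKEN"]     # personal access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_unhealthy_clusters():
    """Return clusters that are neither RUNNING nor cleanly TERMINATED."""
    resp = requests.get(f"{HOST}/api/2.0/clusters/list", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    clusters = resp.json().get("clusters", [])
    return [c for c in clusters if c.get("state") not in ("RUNNING", "TERMINATED")]

def list_failed_job_runs():
    """Return recent job runs whose result state is not SUCCESS."""
    resp = requests.get(f"{HOST}/api/2.1/jobs/runs/list", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    runs = resp.json().get("runs", [])
    return [r for r in runs
            if r.get("state", {}).get("result_state") not in (None, "SUCCESS")]

if __name__ == "__main__":
    for cluster in list_unhealthy_clusters():
        print(f"Cluster needs attention: {cluster.get('cluster_name')} ({cluster.get('state')})")
    for run in list_failed_job_runs():
        print(f"Failed run: {run.get('run_id')} -> {run['state'].get('result_state')}")

A script like this would typically run on a schedule and feed alerts into the on-call rotation mentioned above.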
Greymatter Innovationz helps you stay digitally relevant across domains, technologies, and skillsets, every day.

We are looking for: Cloud Automation Engineer - Python & Jenkins (must-have)
Location: India - any location / Remote
Experience Level: 5+ years in cloud automation/DevOps. Immediate joiners only.

Key Responsibilities:
- Design, develop, and maintain automated CI/CD pipelines using Jenkins (Declarative and Scripted pipelines).
- Build Python-based automation scripts for provisioning, monitoring, and deployment tasks (see the sketch after this posting).
- Implement Infrastructure-as-Code using Terraform, CloudFormation, or Ansible.
- Integrate security and quality checks into automated build and release pipelines.
- Collaborate with cloud architects and developers to deploy scalable solutions across AWS / Azure / GCP environments.
- Monitor cloud environments and implement auto-healing and performance-tuning automation.
- Drive pipeline governance, audit logging, and environment tagging for compliance.
- Troubleshoot pipeline failures and build automation dashboards for visibility.

Required Skills & Experience:
- 5+ years in DevOps / Cloud Automation roles.
- Deep hands-on experience with Jenkins pipeline creation and management.
- Strong Python scripting skills for infrastructure and DevOps use cases.
- Strong knowledge of Terraform, CloudFormation, or similar IaC tools.
- Hands-on experience with AWS or Azure.
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Proficient in Git and CI/CD workflows (e.g., GitHub Actions, GitLab CI).

Preferred Qualifications:
- Certification in AWS (DevOps Engineer), Azure (DevOps Engineer Expert), or a GCP equivalent.
- Experience with monitoring tools (e.g., Prometheus, Grafana, CloudWatch).
- Familiarity with secrets management and secure CI/CD practices (Vault, SOPS, etc.).
- Prior experience automating disaster recovery and high-availability infrastructure.

At Greymatter Innovationz, we offer: Motivating work environment. Excellent work culture. Help to upgrade yourself to the next level. And more!
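As a small illustration of the Python automation this role calls for, the sketch below checks EC2 instances for a required governance tag using boto3, the kind of compliance check that supports the tagging responsibility above. The tag key and region are hypothetical placeholders, and this is a minimal example rather than a prescribed implementation.

# Tag-compliance sketch using boto3 (illustrative only).
# Assumes AWS credentials are already configured; the required tag key is hypothetical.
import boto3

REQUIRED_TAG = "Environment"   # hypothetical governance tag

def find_untagged_instances(region: str = "us-east-1") -> list[str]:
    """Return IDs of EC2 instances missing the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Instance missing '{REQUIRED_TAG}' tag: {instance_id}")

In practice a check like this would run as a scheduled Jenkins job and feed the compliance dashboards mentioned in the responsibilities.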
Greymatter Innovationz helps you stay digitally relevant across domains, technologies, and skillsets, every day.

We are looking for: Lead AWS Cloud Engineer - Databricks Platform
Location: Princeton, NJ
Duration: Contract or FTE
Shift Time: EST time zone

Position Overview:
We are seeking a highly skilled and experienced AWS Cloud Engineer with a strong background in Databricks to join our dynamic cloud engineering team. The ideal candidate will have extensive experience in designing, deploying, and managing cloud infrastructure on AWS, along with in-depth expertise in the Databricks platform for data engineering, analytics, and machine learning workloads. The candidate should also have prior leadership experience, as they will be responsible for guiding a team of engineers on cloud initiatives and best practices and for ensuring the successful delivery of data-driven solutions.

Key Responsibilities:

Cloud Infrastructure Design & Implementation:
- Lead the design, implementation, and optimization of scalable and secure AWS cloud solutions.
- Utilize AWS services such as EC2, S3, Lambda, RDS, VPC, IAM, and others to build robust cloud-based architectures.
- Work closely with the architecture team to define infrastructure requirements and recommend suitable AWS technologies.

Databricks Platform Expertise:
- Leverage Databricks for building and managing data pipelines, running Spark-based analytics workloads, and orchestrating machine learning models.
- Design, optimize, and maintain Databricks clusters and notebooks for data processing, transformation, and analysis.
- Integrate Databricks with other AWS services (e.g., S3, Redshift, Glue) to build comprehensive end-to-end data solutions (a small integration sketch follows this posting).
- Provide leadership on Databricks best practices to ensure the reliability, scalability, and efficiency of data workflows.

Leadership & Mentoring:
- Lead a team of cloud engineers, guiding them in implementing solutions that align with organizational goals and industry best practices.
- Mentor junior team members and provide technical guidance on cloud-related projects, Databricks integration, and infrastructure optimization.
- Facilitate cross-functional collaboration between engineering, data science, and business teams.

Automation & Optimization:
- Automate cloud infrastructure provisioning and management using Infrastructure-as-Code tools (e.g., Terraform, CloudFormation).
- Continuously monitor and optimize AWS resources to improve performance and reduce costs.

Troubleshooting & Support:
- Provide expert-level troubleshooting and support for cloud infrastructure, Databricks platform issues, and production systems.
- Identify and resolve performance bottlenecks, security risks, and system inefficiencies across AWS and Databricks environments.

Documentation & Reporting:
- Document cloud infrastructure, Databricks configurations, and solutions for internal knowledge sharing.
- Prepare regular reports on system performance, cost management, and the status of cloud initiatives.

Key Requirements:

Experience:
- 5+ years of experience in AWS cloud engineering, with a proven track record of delivering cloud-based solutions.
- 3+ years of hands-on experience with the Databricks platform, including building and managing data pipelines, notebooks, and Spark-based workloads.
- 2+ years of experience leading or mentoring a team of engineers, with strong leadership and team-building skills.

Technical Skills:
- Proficient in AWS cloud services (EC2, S3, Lambda, RDS, VPC, IAM, CloudWatch, etc.).
- Strong knowledge of Databricks, Apache Spark, and data engineering best practices.
- Experience with containerization and orchestration tools (Docker, Kubernetes).
- Familiar with Infrastructure-as-Code tools (Terraform, CloudFormation).
- Experience with data processing frameworks, data lakes, and analytics pipelines.

Additional Skills:
- Strong understanding of security best practices in cloud environments.
- Ability to manage complex cloud projects with multiple stakeholders.
- Excellent communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Experience with Agile methodologies is a plus.

Education & Certifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent experience).
- AWS Certified Solutions Architect – Professional (preferred).
- Databricks Certified Associate Developer (preferred, but not required).

At Greymatter Innovationz, we offer: Motivating work environment. Excellent work culture. Help to upgrade yourself to the next level. And more!
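As a small illustration of the Databricks-to-S3 integration mentioned under Databricks Platform Expertise, the sketch below shows a notebook-style PySpark cell that reads raw JSON from S3 and appends it to a Delta table. The bucket, path, and table names are hypothetical placeholders, and the snippet assumes it runs on a Databricks cluster with S3 access, where the spark session is already provided.

# Notebook-style sketch: S3 JSON -> Delta table (illustrative only).
# Assumes execution on a Databricks cluster with instance-profile access to S3;
# the bucket, path, and table names below are hypothetical.
from pyspark.sql import functions as F

raw_path = "s3://example-data-lake-bucket/raw/events/"    # hypothetical S3 location

events = (
    spark.read.json(raw_path)                             # `spark` is provided by Databricks
    .withColumn("ingest_date", F.current_date())          # simple partitioning column
)

(
    events.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("analytics.raw_events")                  # hypothetical target table
)

A pipeline like this would normally be scheduled as a Databricks job and complemented by downstream transformations in Glue or Redshift, per the integration responsibilities above.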