Requirements:
- Experience in Databricks for building ETL pipelines.
- Experience in building data models and optimizing queries.
- Experience with data management and reporting processes.
- Experience with medallion architecture and data governance.

Responsibilities:
- Build, test, and maintain scalable ETL pipelines using Databricks and Apache Spark.
- Contribute to the development of data models and optimize queries for performance and efficiency.
- Participate in the design and implementation of data governance practices, ensuring data quality and consistency across pipelines.
- Assist in the implementation and maintenance of the medallion architecture (bronze, silver, and gold layers) to streamline data processing and reporting.
- Collaborate with cross-functional teams to integrate data management and reporting processes into broader business initiatives.
- Troubleshoot and optimize Databricks workflows to enhance performance and reduce costs.
- Maintain clear documentation of ETL pipelines, data models, and governance procedures to ensure transparency and scalability.
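The bronze/silver/gold layering mentioned above can be sketched in plain Python. This is only an illustration of the pattern: in Databricks the layers would be PySpark DataFrames persisted as Delta tables, and the record fields here are hypothetical.

```python
# Minimal medallion-architecture sketch: bronze (raw) -> silver (cleaned) -> gold (aggregated).
# Plain Python stands in for PySpark/Delta purely for illustration.

def to_silver(bronze_rows):
    """Clean raw rows: drop malformed records, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("amount") is None or row.get("region") is None:
            continue  # data-quality rule: discard incomplete records
        silver.append({"region": row["region"].strip().lower(),
                       "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned rows into a reporting-ready summary."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [{"region": " EU ", "amount": "10.5"},
          {"region": "US", "amount": "4"},
          {"region": None, "amount": "99"},   # malformed, dropped at the silver layer
          {"region": "eu", "amount": "2.5"}]
gold = to_gold(to_silver(bronze))
print(gold)  # {'eu': 13.0, 'us': 4.0}
```

The key design point carried over from the real architecture is that each layer only reads from the one before it, so quality rules and aggregations stay in well-defined places.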
We are seeking a highly skilled DevOps Architect with expertise in OpenShift and Rancher Kubernetes Engine to design, implement, and optimize cloud-native infrastructure. The ideal candidate will have extensive experience in Kubernetes orchestration, containerization, CI/CD pipelines, and cloud automation to drive scalable and resilient deployments.

Key Responsibilities:
- Architect and Implement: Design and deploy scalable, high-availability Kubernetes clusters using OpenShift and Rancher Kubernetes Engine (RKE).
- Automation & Orchestration: Develop and manage infrastructure-as-code (IaC) solutions using Terraform, Helm, or Ansible.
- CI/CD Integration: Implement and optimize CI/CD pipelines using Jenkins, GitLab CI/CD, ArgoCD, or Tekton for automated deployment and testing.
- Security & Compliance: Enforce security best practices for Kubernetes clusters, RBAC policies, service mesh configurations, and container image scanning.
- Monitoring & Logging: Set up observability solutions using Prometheus, Grafana, the ELK/EFK stack, or OpenTelemetry for proactive monitoring and alerting.
- Multi-Cloud & Hybrid Cloud Deployments: Design hybrid-cloud and multi-cloud strategies using AWS, Azure, GCP, or on-prem solutions integrated with OpenShift and Rancher.
- SRE & Performance Optimization: Implement SRE best practices for high availability, auto-scaling, and performance tuning of microservices architectures.
- Collaboration: Work closely with development, security, and operations teams to streamline DevOps processes and enable faster deployments.
- Disaster Recovery & Backup: Implement disaster recovery strategies, backup automation, and cluster failover solutions.

Required Skills & Experience:
- Kubernetes & Containerization: Deep understanding of Kubernetes orchestration, OpenShift, and Rancher Kubernetes Engine (RKE2/RKE).
- Containerization & Service Mesh: Experience with Docker, Istio, Linkerd, or Envoy.
- Infrastructure as Code (IaC): Hands-on expertise with Terraform, Helm, and Ansible.
- CI/CD Pipelines: Strong knowledge of Jenkins, GitOps (ArgoCD, FluxCD), and Tekton.
- Cloud Platforms: Experience with AWS, Azure, GCP, and on-premises Kubernetes clusters.
- Monitoring & Logging: Experience with Prometheus, Grafana, the ELK/EFK stack, and OpenTelemetry.
- Security & Compliance: Kubernetes RBAC, Pod Security Policies, image scanning, and network policies.
- Scripting & Automation: Proficiency in Bash, Python, or Go for automation and scripting.
- Networking & Load Balancing: Expertise in Kubernetes networking, Ingress controllers (NGINX, Traefik), and service discovery.
- Backup & DR: Experience with Velero, Longhorn, or Kasten for Kubernetes backup and recovery.
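The RBAC enforcement listed under Security & Compliance follows a simple model: roles hold rules, bindings attach roles to subjects, and anything not explicitly allowed is denied. A toy sketch of that decision logic (the role and subject names are invented; real clusters evaluate RBAC server-side from Role/RoleBinding objects):

```python
# Illustrative model of Kubernetes RBAC semantics: role -> rules, binding -> subjects.
# This only mimics the allow/deny decision; it is not the Kubernetes API.

ROLES = {
    "pod-reader": [{"verbs": {"get", "list"}, "resources": {"pods"}}],
    "deployer":   [{"verbs": {"create", "update"}, "resources": {"deployments"}}],
}
BINDINGS = [
    {"role": "pod-reader", "subjects": {"alice"}},
    {"role": "deployer",   "subjects": {"ci-bot"}},
]

def is_allowed(user, verb, resource):
    """Allow if any role bound to the user has a rule matching verb and resource."""
    for binding in BINDINGS:
        if user not in binding["subjects"]:
            continue
        for rule in ROLES[binding["role"]]:
            if verb in rule["verbs"] and resource in rule["resources"]:
                return True
    return False  # RBAC is deny-by-default

print(is_allowed("alice", "get", "pods"))             # True
print(is_allowed("alice", "delete", "pods"))          # False
print(is_allowed("ci-bot", "create", "deployments"))  # True
```

The deny-by-default return is the property that makes least-privilege policies workable: granting a new permission is an additive change to a role, never a subtraction from a blanket allow.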
We are looking for a seasoned Senior Full Stack Developer who excels at both backend and frontend development and can lead projects from concept to delivery. In this role, you will architect, develop, and maintain robust and scalable web applications using Java (with Spring Boot and Microservices) and React. You will work closely with cross-functional teams to ensure technical excellence, drive innovation, and mentor junior developers.

Key Responsibilities:
- Full-Stack Development: Architect, develop, and maintain robust Java-based backend systems (using Spring Boot and Microservices) and dynamic, high-performance React applications.
- Mentorship: Guide and mentor junior developers, conducting code reviews and ensuring adherence to best practices in software design and development.
- Integration & API Development: Design and implement secure and efficient RESTful APIs for seamless front-end and back-end integration.
- Performance Optimization: Analyze, optimize, and troubleshoot application performance issues, ensuring smooth, scalable operations.
- CI/CD & DevOps: Collaborate with the DevOps team to implement CI/CD pipelines, containerization (Docker, Kubernetes), and automated testing strategies.
- Code Quality & Testing: Ensure high code quality through robust unit, integration, and end-to-end testing practices. Advocate for test-driven development (TDD) where appropriate.
- Continuous Learning: Stay updated with the latest trends and technologies in Java, React, and cloud platforms to drive continuous improvement and innovation.

Required Skills:
- Java Expertise: Extensive experience with Java, Spring Boot, and Microservices architecture.
- Frontend Proficiency: Advanced skills in React.js, along with a strong command of JavaScript, TypeScript, HTML5, and CSS3.
- API Development: Proficiency in designing and consuming RESTful APIs and familiarity with WebSocket implementations.
- Database Knowledge: Experience with both SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB).
- DevOps Familiarity: Hands-on experience with CI/CD tools, Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP).
- Testing & QA: Strong knowledge of unit testing frameworks (JUnit, Jest, Mocha, Cypress) and automated testing practices.
- Agile Methodologies: Proven experience working in Agile development environments, utilizing version control systems (Git, GitHub, Bitbucket).
- Problem Solving: Exceptional analytical and debugging skills with a proactive attitude toward addressing challenges.

Preferred Skills:
- Architectural Leadership: Experience in designing system architectures for large-scale applications.
- Microfrontend Architecture: Familiarity with modern frontend architectural patterns.
- GraphQL Experience: Knowledge of GraphQL for building flexible and efficient APIs.
- Mentorship & Leadership: Prior experience in leading teams or managing projects in a senior role.
- Communication: Excellent verbal and written communication skills with the ability to articulate complex technical concepts to non-technical stakeholders.
Responsibilities:
- Design and implement robust, scalable ETL/ELT pipelines using AWS-native tools.
- Ingest and transform data from multiple sources into S3, applying schema discovery via AWS Glue Crawlers.
- Develop and orchestrate workflows using Apache Airflow, AWS Step Functions, and Lambda functions.
- Build and optimize data models in Amazon Redshift for analytics consumption.
- Manage and enforce IAM-based access control, ensuring secure data practices.
- Write clean, modular, and reusable code in PySpark and SQL for large-scale data processing.
- Implement monitoring, alerting, and CI/CD pipelines to improve deployment efficiency and reliability.
- Work closely with business stakeholders and analysts to understand data requirements and deliver meaningful insights.
- Participate in code reviews and knowledge-sharing activities across teams.
- Understand Scrum and be comfortable working in an Agile environment.

Required Skills:
- 4+ years of experience as a Data Engineer, with at least 3+ years working in cloud-native environments (preferably AWS).
- Hands-on experience with S3, Redshift, Glue (ETL & Crawlers), Lambda, Step Functions, and Airflow.
- Strong programming skills in PySpark and SQL.
- Experience designing and implementing data lakes, data warehouses, and real-time/near-real-time pipelines.
- Familiarity with DevOps, CI/CD pipelines, and infrastructure-as-code tools (e.g., Git, CloudFormation, Terraform).
- Understanding of data governance, data security, and role-based access control in cloud environments.
- Strong problem-solving skills and the ability to work independently as well as collaboratively.
- Excellent written and verbal communication skills.

Nice to Have:
- Experience working in domains such as nonprofit, healthcare, or campaign marketing.
- Familiarity with AWS Notebooks, Athena, and CloudWatch.
- Exposure to data observability tools, testing frameworks, or event-driven architectures.
- Experience mentoring junior engineers or leading small teams.
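The ingest-transform-load shape described above can be sketched end to end in a few lines. Here sqlite3 stands in for Redshift so the example is self-contained; on AWS the load target, credentials, and SQL dialect would differ, and the event fields are invented for illustration.

```python
# Sketch of an ingest -> transform -> load -> query flow.
# sqlite3 is a stand-in for the warehouse (e.g. Redshift); the shape is the same.
import sqlite3

raw_events = [
    {"user": "u1", "event": "click", "value": "3"},
    {"user": "u2", "event": "click", "value": "5"},
    {"user": "u1", "event": "view",  "value": "1"},
]

# Transform: cast types and keep only the events the analytics model needs.
rows = [(e["user"], e["event"], int(e["value"]))
        for e in raw_events if e["event"] == "click"]

# Load into the warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, event TEXT, value INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# Analytics-style aggregation, as a warehouse query would express it.
total = conn.execute("SELECT SUM(value) FROM events").fetchone()[0]
print(total)  # 8
```

Keeping the transform step as pure functions over plain records is what makes such pipelines easy to unit-test before they ever touch cloud infrastructure.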
About Us:
We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation, and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.

Roles and Responsibilities:
- Perform black-box and gray-box testing on web applications, ensuring coverage of all system functionalities.
- Assess acceptance criteria by identifying and reporting defects, variations, and discrepancies between development deliverables and the defined user stories.
- Publish detailed test reports and maintain documentation of all testing activities, including test plans, test cases, and defect reports, to guide the team in decision-making.
- Actively engage in Scrum ceremonies, collaborating with Product Owners, Agile Coaches, and development teams to prioritize issues, identify risk areas, and resolve blockers.
- Perform post-release testing in production environments, working with IT and operations to ensure quality standards are met.
- Stay updated with industry best practices for testing, identifying areas for improvement and striving for higher efficiency and quality.
- Use testing tools like Selenium, Java, Jira, TestRail, and databases with SQL to support testing activities, with a focus on Agile methodologies and frameworks.
- Desirable: the ability to define, execute, and automate regression tests to maintain system stability across multiple iterations, minimizing the risk of defects through automation tools like Selenium and Cypress.

Why Work for Material:
In addition to fulfilling, high-impact work, company culture and benefits are integral to determining if a job is the right fit for you. Here's a bit about who we are and highlights of what we offer.

Who We Are & What We Care About:
Material is a global company and we work with best-of-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice. Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance, and healthcare. Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world. We develop capabilities, craft, and leading-edge market offerings across seven global practices including strategy and insights, design, data & analytics, technology, and tracking. Our engagement management team makes it all hum for clients. We prize inclusion and interconnectedness. We amplify our impact through the people, perspectives, and expertise we engage in our work. Our commitment to deep human understanding combined with a science-and-systems approach uniquely equips us to bring a rich frame of reference to our work. A community focused on learning and making an impact: Material is an outcomes-focused company. We create experiences that matter, create new value, and make a difference in people's lives.

What We Offer:
- Professional development and mentorship.
- Hybrid work mode with a remote-friendly workplace (Great Place To Work certified six times in a row).
- Health and family insurance.
- 40+ leaves per year along with maternity and paternity leaves.
- Wellness, meditation, and counselling sessions.
About the Role:
We are looking for a skilled Full Stack Data Engineer to join our growing data platform team. You will be responsible for designing, building, and maintaining scalable data pipelines and APIs that power internal tools and analytics, with deep integration across AWS services and DevOps best practices. This is a cross-functional role that requires a strong foundation in backend engineering, cloud-native data workflows, and API development. You will work closely with data scientists, analysts, and product stakeholders to build robust, secure, and reliable data systems.

Key Responsibilities:
- Design and build scalable ETL/ELT pipelines using AWS-native tools (Glue, Lambda, S3, DMS, Redshift, etc.).
- Develop high-performance RESTful APIs using FastAPI, enabling secure and efficient access to data assets.
- Architect data platforms and microservices for ingestion, transformation, access control, and monitoring.
- Implement authentication and authorization mechanisms using OAuth2/JWT.
- Manage infrastructure and deployment using CI/CD pipelines (GitHub Actions, CodePipeline, etc.).
- Write modular, testable, and well-documented Python code across backend and data workflows.
- Monitor, debug, and tune performance across cloud services and APIs.
- Collaborate with DevOps and Security teams to enforce best practices for data security, access controls, and compliance.
- Contribute to a culture of technical excellence, knowledge sharing, and continuous improvement.

Must-Have Skills:
- Strong experience with the AWS data stack: S3, Glue, Athena, Redshift, Lambda, DMS, IAM, CloudWatch.
- Proficiency in Python, especially with frameworks like FastAPI and Pandas.
- Hands-on experience building and deploying RESTful APIs with JWT-based auth.
- Experience building data ingestion/transformation pipelines (structured and unstructured data).
- Expertise in designing CI/CD workflows for automated testing, deployment, and rollback.
- Knowledge of SQL and performance tuning for cloud data warehouses.
- Familiarity with containerization and infrastructure-as-code tools like Docker and CDK.
- Version control experience with Git, and Agile/Scrum methodologies.

Good to Have:
- Experience with orchestration tools like Airflow, Prefect, or AWS Step Functions.
- Experience with Terraform for infrastructure as code.
- Exposure to data quality, observability, or lineage tools.
- Understanding of data security and compliance (GDPR, HIPAA, etc.).
- Familiarity with ML model deployment pipelines is a plus.
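The JWT-based auth called for above reduces to signing and verifying a compact token. A minimal HS256 sketch using only the standard library (production code would use a maintained library such as PyJWT, add expiry claims, and keep the secret out of source; the key and claims here are hypothetical):

```python
# Minimal HS256 JWT sign/verify sketch, standard library only.
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # hypothetical key, for illustration only

def _b64(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = _b64(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str) -> bool:
    header, body, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign({"sub": "data-api-user", "role": "reader"})
tampered = token[:-1] + ("A" if token[-1] != "A" else "B")
print(verify(token))     # True
print(verify(tampered))  # False: any change to header, body, or signature fails
```

In a FastAPI service this check would sit in a dependency that rejects requests before they reach a route handler, which is what makes the "secure and efficient access to data assets" responsibility enforceable in one place.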
Key Responsibilities:
- Develop and maintain scalable web applications using Python for the backend.
- Work with databases such as Postgres and MongoDB to design and manage robust data structures.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Identify and fix bottlenecks and bugs.
- Experience with CI/CD pipelines.
- Experience with any JavaScript framework.

Key Requirements:
- React: Extensive experience in building complex frontend applications (good to have).

Must-Have:
- Strong experience with Python and Golang basics.
- Experience with microservices and basic Docker.
- Experience with databases like Postgres and MongoDB.
- Ability to work independently and as part of a team.
- Excellent problem-solving skills and attention to detail.

What We Offer:
- Professional development and mentorship.
- Hybrid work mode with a remote-friendly workplace (Great Place To Work certified six times in a row).
- Health and family insurance.
- 40+ leaves per year along with maternity and paternity leaves.
- Wellness, meditation, and counselling sessions.
Position: Technical Lead DevOps

Qualifications:
- Minimum 4+ years of industry experience with strong knowledge of software engineering best practices.
- Proven expertise in designing, implementing, and managing DevOps processes, tools, and infrastructure.

Technical Skills & Experience:

Must-Have:
- Programming Languages: Python, Golang, Ruby, JavaScript (experience with any of these).
- Infrastructure as Code (IaC): Terraform, Ansible (proficiency in at least one).
- CI/CD Tooling: GitHub Actions, CircleCI, Jenkins, GitLab (hands-on experience with one or more).
- Cloud Platforms: AWS, Azure, or GCP (extensive hands-on experience with at least one major provider).
- Containerization & Orchestration: Kubernetes, Docker, Helm.
- Operating Systems: Linux and Windows (administration and troubleshooting).
- Networking: Load balancing, network security, standard network protocols.

Nice-to-Have:
- Monitoring & Analytics Tools: Dynatrace, Splunk, Elastic, Prometheus.
- Container Orchestration Enhancements: RBAC, network policies, service meshes, Cluster API.
- Project Coordination: Experience managing timelines, priorities, and cross-functional communication.
- Agile Methodologies: Prior experience working in Agile teams.

Key Responsibilities:
- Lead and guide DevOps initiatives, ensuring best practices are followed across the development and deployment lifecycle.
- Design, implement, and maintain scalable CI/CD pipelines and infrastructure.
- Drive automation for cloud infrastructure, deployments, and monitoring systems.
- Collaborate with development teams, architects, and other stakeholders to align on technical goals.
- Troubleshoot and resolve complex infrastructure and deployment issues.
- Mentor team members, fostering continuous learning and technical excellence.

What We Expect From You:
- A growth mindset with a strong willingness to learn emerging technologies in cloud, orchestration, networking, and operating systems.
- Deep focus on AWS services, with the ability to build and optimize scalable infrastructure.
- Commitment to knowledge-sharing and skill development within the team.
- Ability to take ownership, work independently, and deliver results in a fast-paced environment.
Job Title: Senior/Lead Data Scientist
Experience Required: 4+ Years

About the Role:
We are seeking a skilled and innovative Machine Learning Engineer with 4+ years of experience to join our AI/ML team. The ideal candidate will have strong expertise in Computer Vision, Generative AI (GenAI), and Deep Learning, with a proven track record of deploying models in production environments using Python, MLOps best practices, and cloud platforms like Azure ML.

Key Responsibilities:
- Design, develop, and deploy AI/ML models for Computer Vision and GenAI use cases.
- Build, fine-tune, and evaluate deep learning architectures (CNNs, Transformers, diffusion models, etc.).
- Collaborate with product and engineering teams to integrate models into scalable pipelines and applications.
- Manage the complete ML lifecycle using MLOps practices (versioning, CI/CD, monitoring, retraining).
- Develop reusable Python modules and maintain high-quality, production-grade ML code.
- Work with Azure Machine Learning Services for training, inference, and model management.
- Analyze large-scale datasets, extract insights, and prepare them for model training and validation.
- Document technical designs, experiments, and decision-making processes.

Required Skills & Experience:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong experience in Computer Vision tasks such as object detection, image segmentation, OCR, etc.
- Practical knowledge and implementation experience in Generative AI (LLMs, diffusion models, embeddings).
- Solid programming skills in Python, with experience using frameworks like PyTorch, TensorFlow, OpenCV, and Transformers (HuggingFace).
- Good understanding of MLOps concepts, model deployment, and lifecycle management.
- Experience with cloud platforms, preferably Azure ML, for scalable model training and deployment.
- Familiarity with data labeling tools, synthetic data generation, and model interpretability.
- Strong problem-solving, debugging, and communication skills.

Good to Have:
- Experience with NLP, multimodal learning, or 3D computer vision.
- Familiarity with containerization tools (Docker, Kubernetes).
- Experience in building end-to-end ML pipelines using MLflow, DVC, or similar tools.
- Exposure to CI/CD pipelines for ML projects and working in agile development environments.

Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, Data Science, or a related field.
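One of the MLOps practices named above, model versioning, can be sketched as a toy registry. All names here are invented for illustration; real projects would use MLflow's or Azure ML's model registry rather than rolling their own.

```python
# Toy sketch of model versioning and promotion, one MLOps concern from the role above.
class ModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of (version, metrics)

    def register(self, name, metrics):
        """Record a new immutable version; versions are never overwritten."""
        versions = self._versions.setdefault(name, [])
        versions.append((len(versions) + 1, metrics))
        return len(versions)

    def best(self, name, metric):
        """Pick the version with the highest value of a metric, e.g. for promotion."""
        return max(self._versions[name], key=lambda v: v[1][metric])[0]

registry = ModelRegistry()
registry.register("ocr-detector", {"f1": 0.81})   # hypothetical CV model and scores
registry.register("ocr-detector", {"f1": 0.86})
print(registry.best("ocr-detector", "f1"))  # 2
```

The point the sketch carries over is that retraining appends a version with its evaluation metrics attached, so promotion and rollback become metric-driven lookups instead of manual file shuffling.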
Job Summary:
We are seeking a Senior Data Engineer (Databricks) with a strong development background in Azure Databricks and Python, who will be instrumental in building and optimising scalable data pipelines and solutions across the Azure ecosystem. This role requires hands-on development experience with PySpark, data modelling, and Azure Data Factory. You will collaborate closely with data architects, analysts, and business stakeholders to ensure reliable and high-performance data solutions.

Experience Required: 4+ years (Microsoft Azure, Databricks, Data Factory, data engineering, data modelling)

Key Responsibilities:
- Develop and Maintain Data Pipelines: Design, implement, and optimise scalable data pipelines using Azure Databricks (PySpark) for both batch and streaming use cases.
- Azure Platform Integration: Work extensively with Azure services including Data Factory, ADLS Gen2, Delta Lake, and Azure Synapse for end-to-end data pipeline orchestration and storage.
- Data Transformation & Processing: Write efficient, maintainable, and reusable PySpark code for data ingestion, transformation, and validation processes within the Databricks environment.
- Collaboration: Partner with data architects, analysts, and data scientists to understand requirements and deliver robust, high-quality data solutions.
- Performance Tuning and Optimisation: Optimise Databricks cluster configurations, notebook performance, and resource consumption to ensure cost-effective and efficient data processing.
- Testing and Documentation: Implement unit and integration tests for data pipelines. Document solutions, processes, and best practices to enable team growth and maintainability.
- Security and Compliance: Ensure data governance, privacy, and compliance are upheld across all engineered solutions, following Azure security best practices.

Preferred Skills:
- Strong hands-on experience with Delta Lake, including table management, schema evolution, and implementing ACID-compliant pipelines.
- Skilled in developing and maintaining Databricks notebooks and jobs for large-scale batch and streaming data processing.
- Experience writing modular, production-grade PySpark and Python code, including reusable functions and libraries for data transformation.
- Experience in streaming data ingestion and Structured Streaming in Databricks for near-real-time data solutions.
- Knowledge of performance tuning techniques in Spark, including job optimization, caching, and partitioning strategies.
- Exposure to data quality frameworks and testing practices (e.g., pytest, data validation libraries, custom assertions).
- Basic understanding of Unity Catalog for managing data governance, access controls, and lineage tracking from a developer's perspective.
- Familiarity with Power BI: able to structure data models and views in Databricks or Synapse to support BI consumption.
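The ACID-compliant pipelines mentioned under Preferred Skills typically hinge on upsert (MERGE) semantics: update rows that match on a key, insert the ones that don't. A plain-dict sketch of that logic (in Databricks this is a `MERGE INTO` against a Delta table; the row shape here is hypothetical):

```python
# Sketch of upsert (MERGE) semantics behind a Delta Lake pipeline, modeled with dicts.
def merge(target, updates, key="id"):
    """Update matching rows, insert new ones: the core of a MERGE statement."""
    merged = {row[key]: dict(row) for row in target}   # copy to avoid mutating input
    for row in updates:
        merged.setdefault(row[key], {}).update(row)    # matched -> update, else insert
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "status": "new"}, {"id": 2, "status": "new"}]
updates = [{"id": 2, "status": "shipped"}, {"id": 3, "status": "new"}]
print(merge(target, updates))
# [{'id': 1, 'status': 'new'}, {'id': 2, 'status': 'shipped'}, {'id': 3, 'status': 'new'}]
```

What Delta Lake adds on top of this logic is that the whole merge commits atomically, which is why re-running a failed batch doesn't leave half-applied updates.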
Role: Senior/Lead Fullstack Developer
Experience: 4-9 years

Job Description: Software Developer, Retail Pricing Solution (Python / React / TypeScript; GoLang is good to have)
- Contribute to the development and maintenance of a Java-based pricing solution in the retail domain, focusing on building scalable front-end and back-end components.
- Work with technologies such as GoLang, Python, React, and TypeScript to develop, enhance, and debug application features under the guidance of senior engineers.
- Collaborate closely with cross-functional Agile teams including QA, Product, and Business Analysts to understand requirements and deliver quality code.
- Participate in code reviews, unit testing, and troubleshooting to ensure performance, security, and reliability of the solution.
- Continuously learn and adapt to evolving technologies and coding standards within a distributed team environment.
- Good understanding of software development best practices and a strong willingness to grow into a full-stack or specialized role over time.

What We Offer:
- Professional development and mentorship.
- Hybrid work mode with a remote-friendly workplace (Great Place To Work certified six times in a row).
- Health and family insurance.
- 40+ leaves per year along with maternity and paternity leaves.
- Wellness, meditation, and counselling sessions.
Role: Senior/Lead Full Stack Developer - Python and React
Experience: 4+ years

Senior/Lead Software Developer:
- Design, develop, and maintain robust, scalable components using Python, React, and TypeScript.
- Take ownership of full-stack development tasks, from backend services to modern front-end interfaces, ensuring high performance, security, and code quality.
- Collaborate with Product Owners, Business Analysts, and cross-functional Agile teams to translate complex business requirements into technical solutions.
- Mentor junior developers, conduct code reviews, and enforce best practices in coding, testing, CI/CD, and architectural design.
- Troubleshoot performance issues, identify areas for improvement, and contribute to system optimization and refactoring efforts.
- Stay current with industry trends and bring innovative ideas to enhance the product's scalability, maintainability, and user experience.
Job Summary: We are looking for a detail-oriented and highly skilled Senior QA Engineer to join our team. The ideal candidate will have a strong background in Manual and Automation Testing , hands-on experience with Python Playwright for API automation , and experience with tools like Mongo DB, Splunk and HTML test reporting. Knowledge of Gen AI is a plus. Years of Experience: 5 to 9 years of hands-on experience in test automation Proven experience in test-automation of web and API testing Core Responsibilities to be done on this project: Design, develop, and execute manual and automated test cases based on product requirements. Perform API automation testing using Python + Playwright. Create and manage HTML test reports to communicate test results effectively. Work closely with developers and product teams to define and implement quality standards. Create, maintain, and update test cases, test plans, and bug reports. Utilize MongoDB for data validation and test data setup. Monitor and analyze logs using Splunk to identify and troubleshoot issues. Integrate automated tests with CI/CD pipelines. Leverage knowledge of Generative AI tools and techniques to improve testing and productivity (preferred). Participate in Agile ceremonies (daily standups, sprint planning, retrospectives) Required Skills & Qualifications: 5+ years of experience in QA (Manual + Automation). Strong hands-on experience with Python + Playwright for automation. Experience in building, maintaining, and executing a custom automation testing framework using Python (Playwright + Pytest). Proficiency in API automation Testing. Experience in HTML report generation (pytest-html). Familiarity with MongoDB for test data validation. Exposure to log monitoring tools like Splunk. Strong skills in test case design, execution, and defect tracking using tools like Jira. Understanding of SDLC, STLC, and Agile methodologies. Foundational knowledge of Generative AI tools (preferred). 
Experience with CI/CD pipelines (e.g., Jenkins, GitHub Actions)
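The HTML reporting responsibility above is typically handled by pytest-html; as a rough, dependency-free sketch of the kind of pass/fail summary such a report carries (test names, outcomes, and durations here are hypothetical, not from any real suite):

```python
from html import escape

# Hypothetical test outcomes, shaped like what a test runner might collect.
RESULTS = [
    {"name": "test_login_api", "outcome": "passed", "duration_s": 0.42},
    {"name": "test_order_api", "outcome": "failed", "duration_s": 1.07},
]

def render_report(results):
    """Render a minimal HTML table summarising test outcomes."""
    rows = "".join(
        f"<tr><td>{escape(r['name'])}</td>"
        f"<td>{escape(r['outcome'])}</td>"
        f"<td>{r['duration_s']:.2f}s</td></tr>"
        for r in results
    )
    passed = sum(1 for r in results if r["outcome"] == "passed")
    return (
        f"<h1>Test Report: {passed}/{len(results)} passed</h1>"
        f"<table><tr><th>Test</th><th>Outcome</th><th>Duration</th></tr>"
        f"{rows}</table>"
    )

report = render_report(RESULTS)
```

In practice pytest-html produces this for free via `pytest --html=report.html`; the sketch only shows the shape of the information being communicated.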
About Us:- We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe. Why work for Material? In addition to fulfilling, high-impact work, company culture and benefits are integral to determining if a job is a right fit for you. Here's a bit about who we are and highlights around what we offer. Who We Are & What We Care About:- Material is a global company and we work with best-of-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice. Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance and healthcare. Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world. We develop capabilities, craft and leading-edge market offerings across seven global practices including strategy and insights, design, data & analytics, technology and tracking. Our engagement management team makes it all hum for clients. We prize inclusion and interconnectedness. We amplify our impact through the people, perspectives, and expertise we engage in our work.
Our commitment to deep human understanding combined with a science & systems approach uniquely equips us to bring a rich frame of reference to our work. A community focused on learning and making an impact. Material is an outcomes-focused company. We create experiences that matter, create new value and make a difference in people's lives. What We Offer:- Professional Development and Mentorship. Hybrid work mode with remote-friendly workplace. 6 times in a row Great Place To Work Certified. Health and Family Insurance. 40+ Leaves per year along with maternity & paternity leaves. Wellness, meditation and Counselling sessions. Skills: Java or React.js, OR React + Node + TypeScript, OR Java + React.js. Database: Any Relational + NoSQL DB. Core: TDD, Unit Tests, Integration Tests, knowledge around Git, Docker, CI/CD. Job Description: Bachelor's degree in Computer Science, Information Technology, or related field, or equivalent experience. 5+ years of professional software development experience. Proficiency in programming languages such as JavaScript and Java, and ability to pick up another tech stack. Deep understanding of web technologies, APIs and integrations, databases, and cloud services. Advanced knowledge of web technologies and frameworks, e.g. React, NextJS, NodeJS with TypeScript, Spring Boot. Knowledge and experience with database technologies including relational and NoSQL databases, and experience with AWS or other cloud-managed persistence, e.g. AWS RDS, S3, DocumentDB. Experience with version control systems, e.g. Git, and CI/CD pipelines. Understanding of cloud services and architectures. Familiarity with Agile methodologies such as Scrum and Kanban, and practices such as Test-Driven Development (TDD), Behaviour-Driven Development (BDD), Domain-Driven Design (DDD). Experience with testing frameworks and tools. Strong understanding of security best practices and compliance requirements. Experience with accessibility standards, e.g.
WCAG AA Excellent problem-solving and debugging skills Strong knowledge of CI/CD pipelines, DevOps and platform engineering practices, and Agile methodologies
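The TDD and unit-testing practices called out above can be sketched minimally with the standard library; the helper function and its contract here are hypothetical, invented only to show the red-green shape of a test-first workflow:

```python
import unittest

# Hypothetical helper used only for illustration -- not from the posting.
def normalize_email(raw: str) -> str:
    """Lower-case and trim an email address, rejecting obviously bad input."""
    cleaned = raw.strip().lower()
    if not cleaned or "@" not in cleaned:
        raise ValueError(f"invalid email: {raw!r}")
    return cleaned

class NormalizeEmailTest(unittest.TestCase):
    """Tests written first, TDD-style, to pin down the contract."""

    def test_trims_and_lowercases(self):
        self.assertEqual(
            normalize_email("  Alice@Example.COM "), "alice@example.com"
        )

    def test_rejects_missing_at_sign(self):
        with self.assertRaises(ValueError):
            normalize_email("not-an-email")
```

Run with `python -m unittest` (or pytest); in a TDD loop the tests above would exist, and fail, before `normalize_email` is written.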
Job Responsibilities: Design and Develop Data Pipelines: Develop and optimise scalable data pipelines using Microsoft Fabric, including Fabric Notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Work on both batch and real-time ingestion and transformation. Integrate with Azure Data Factory or Fabric-native orchestration for smooth data flow. Fabric Data Platform Implementation: Collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric (OneLake). Ensure data solutions are performant, reusable, and aligned with business needs and compliance standards. Data Pipeline Optimisation: Monitor and improve performance of data pipelines and notebooks in Microsoft Fabric. Apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains. Collaboration with Cross-functional Teams: Work closely with BI developers, analysts, and data scientists to gather requirements and build high-quality datasets. Support self-service BI initiatives by developing well-structured datasets and semantic models in Fabric. Documentation and Reusability: Document pipeline logic, lakehouse architecture, and semantic layers clearly. Follow development standards and contribute to internal best practices for Microsoft Fabric-based solutions. Microsoft Fabric Platform Execution: Use your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric to deliver reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services. Required Skills and Qualifications: 5+ years of experience in data engineering within the Azure ecosystem, with relevant hands-on experience in Microsoft Fabric, including Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2.
Solid experience with data ingestion, ELT/ETL development, and data transformation across structured and semi-structured sources. Strong understanding of OneLake architecture and modern data lakehouse patterns. Strong command of SQL, PySpark, and Python applied to both data integration and analytical workloads. Ability to collaborate with cross-functional teams and translate data requirements into scalable engineering solutions. Experience in optimising pipelines and managing compute resources for cost-effective data processing in Azure/Fabric. Preferred Skills: Experience working in the Microsoft Fabric ecosystem, including Direct Lake, BI integration, and Fabric-native orchestration features. Familiarity with OneLake, Delta Lake, and Lakehouse principles in the context of Microsoft's modern data platform. Expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks. Understanding of Microsoft Purview, Unity Catalog, or Fabric-native tools for metadata, lineage, and access control. Exposure to DevOps practices for Fabric and Power BI, including Git integration, deployment pipelines, and workspace governance. Knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines is a plus.
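The ingestion-and-transformation work described in this posting follows a familiar raw-to-cleansed shape. As a minimal, dependency-free sketch (record fields are hypothetical, and a real Fabric pipeline would use PySpark notebooks or Dataflows Gen2 rather than plain Python):

```python
import json
from datetime import datetime, timezone

# Hypothetical semi-structured events as they might land in a raw layer.
RAW_EVENTS = [
    '{"id": "1", "user": {"name": "alice"}, "amount": "42.50"}',
    '{"id": "2", "user": {"name": "bob"}, "amount": "not-a-number"}',
]

def to_silver(raw_lines):
    """Parse, flatten, and type-cast raw JSON events; quarantine bad rows."""
    good, bad = [], []
    for line in raw_lines:
        rec = json.loads(line)
        try:
            good.append({
                "id": rec["id"],
                "user_name": rec["user"]["name"],     # flatten nested field
                "amount": float(rec["amount"]),       # enforce schema/type
                "loaded_at": datetime.now(timezone.utc).isoformat(),
            })
        except (KeyError, ValueError):
            bad.append(rec)                           # set aside for review
    return good, bad

good, bad = to_silver(RAW_EVENTS)
```

The quarantine path is the point: "enforce data quality" in practice means bad rows are routed aside and monitored, not silently dropped.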
About Us:- We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe. Experience Range: 6-10 Years. Role: Fullstack Technical Lead. Key Responsibilities: Develop and maintain scalable web applications using React for frontend and Python (FastAPI/Flask/Django) for backend. Work with databases such as SQL (Postgres) and MongoDB to design and manage robust data structures. Collaborate with cross-functional teams to define, design, and ship new features. Ensure the performance, quality, and responsiveness of applications. Identify and fix bottlenecks and bugs. Others: AWS, Snowflake, Azure, JIRA, CI/CD pipelines. Key Requirements: React: Extensive experience in building complex frontend applications. Must-Have: Experience with Python (FastAPI/Flask/Django). Required cloud experience: AWS or Azure. Experience with databases like SQL (Postgres) and MongoDB. Basic understanding of Data Fabric (good to have). Ability to work independently and as part of a team. Excellent problem-solving skills and attention to detail. What We Offer: Professional Development and Mentorship. Hybrid work mode with remote-friendly workplace. 6 times in a row Great Place To Work Certified. Health and Family Insurance.
40+ Leaves per year along with maternity & paternity leaves. Wellness, meditation and Counselling sessions.
About Us:- We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe. JD: Senior Technical Architect DevOps. Technical Skills: You can translate logical designs into physical designs. You can produce detailed designs and document all work using required standards, methods, and tools, including prototyping tools where appropriate. You can design systems characterized by managed levels of risk, manageable business and technical complexity, and meaningful impact. You know how to work with well-understood technology and identify appropriate patterns. You can manage the availability of different service components to ensure they meet business needs and performance targets. You can troubleshoot and identify problems across systems: including computing, storage, networking, physical infrastructure, software, COTS and open-source packages and solutions, and virtual and cloud computing, including IaaS, PaaS, SaaS. An automation mindset is appreciated to manage mundane or repetitive tasks. You are aware of Information Security at large: networks, Docker security, K8s security. You can analyze current processes, identify and implement opportunities to optimize processes, and lead and develop a team of experts to deliver service improvements.
You can help to evaluate and establish requirements for the implementation of changes by setting policy and standards. Responsibilities: Lead the migration of services from Nomad and Consul to Amazon ECS (Fargate or EC2 launch type). Redesign service discovery and networking patterns in the AWS ecosystem. Manage and evolve infrastructure using Terraform (modular, reusable code). Collaborate with development teams to containerize applications and adapt them to ECS architecture. Optimize and maintain GitLab CI/CD pipelines for consistent build, test, and deploy workflows. Integrate ECS with AWS-native services (e.g., CloudWatch, IAM, ALB, Route53, EFS, RDS). Conduct performance testing and tuning of migrated services. Implement observability best practices for the new environment (logging, tracing, metrics). Ensure security, reliability, and scalability of the new platform. Requirements: 7+ years of experience in DevOps, SRE, or Platform Engineering roles. Experience in Docker. Deep knowledge of AWS ECS (Fargate/EC2), networking, IAM, and container lifecycle management. Expertise with Terraform for infrastructure automation (workspaces, modules, state backends). Proficiency with GitLab CI/CD, including runners, pipelines, and environment deployments. Strong understanding of microservices, container orchestration, and cloud-native patterns. Experience with service mesh, load balancing, and traffic routing. Solid grasp of system monitoring, logging, and alerting (CloudWatch, Prometheus, ELK, etc.). Familiarity with Docker best practices and security hardening. Good communication skills and experience working with cross-functional teams. Who We Are & What We Care About: Material is a global company and we work with best-of-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice.
Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance and healthcare. Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world. We develop capabilities, craft and leading-edge market offerings across seven global practices including strategy and insights, design, data & analytics, technology and tracking. Our engagement management team makes it all hum for clients. We prize inclusion and interconnectedness. We amplify our impact through the people, perspectives, and expertise we engage in our work. Our commitment to deep human understanding combined with a science & systems approach uniquely equips us to bring a rich frame of reference to our work. A community focused on learning and making an impact. Material is an outcomes-focused company. We create experiences that matter, create new value and make a difference in people's lives. What We Offer: Professional Development and Mentorship. Hybrid work mode with remote-friendly workplace. 6 times in a row Great Place To Work Certified. Health and Family Insurance. 40+ Leaves per year along with maternity & paternity leaves. Wellness, meditation and Counselling sessions.
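The Nomad-to-ECS migration above centres on recreating each service as an ECS task definition. In practice that is declared in Terraform (`aws_ecs_task_definition`) rather than built by hand; the sketch below only shows the shape of the document, with hypothetical service names and a registry URL invented for illustration:

```python
import json

def make_task_definition(family, image, cpu=256, memory=512):
    """Build a minimal Fargate task definition document (illustrative only)."""
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",            # required for Fargate tasks
        "cpu": str(cpu),                    # ECS expects these as strings
        "memory": str(memory),
        "containerDefinitions": [{
            "name": family,
            "image": image,
            "essential": True,
            "logConfiguration": {           # ship container logs to CloudWatch
                "logDriver": "awslogs",
                "options": {"awslogs-group": f"/ecs/{family}"},
            },
        }],
    }

td = make_task_definition("orders-api", "registry.example.com/orders-api:1.4.2")
print(json.dumps(td, indent=2))
```

The `awsvpc` network mode is what replaces Consul-style service discovery here: each task gets its own ENI, so ALB target groups and Route53/Cloud Map can route to tasks directly.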
About Us:- We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe. Job Description: Databricks Architect. Key Responsibilities: Lead the architecture, design, and implementation of end-to-end data solutions on the Databricks Lakehouse Platform. Build and optimize scalable data pipelines for structured and unstructured data using Apache Spark on Databricks. Define and enforce data engineering and architectural best practices. Collaborate with data scientists, analysts, and business stakeholders to understand requirements and translate them into scalable solutions. Architect and lead the design of CI/CD pipelines and automation for Databricks pipelines. Ensure data governance, security, and compliance in all solutions. Provide technical leadership and mentorship to data engineering teams. Evaluate new tools and technologies to continuously improve platform efficiency and performance. Work with Azure/AWS/GCP cloud environments integrated with Databricks. Required Skills & Qualifications: 8+ years of hands-on experience with Databricks, including Spark, Delta Lake, MLflow, and DBSQL. Strong experience in building large-scale ETL/ELT data pipelines. Solid knowledge of data modeling, data warehousing, and distributed computing.
Experience with cloud platforms (Azure, AWS, or GCP), preferably Azure. Proven track record of implementing best practices in data engineering and architecture. Proficiency in Python, SQL, and Spark-based processing. Strong understanding of DevOps practices in the context of Databricks (CI/CD, infrastructure as code). Excellent communication and stakeholder management skills. Good to Have: Experience or working knowledge of Microsoft Fabric. Familiarity with Power BI, Synapse Analytics, or other modern BI tools. Certifications related to Databricks or Azure (e.g., Databricks Certified Data Engineer, Microsoft Certified: Azure Solutions Architect).
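The ETL/ELT pipeline work above typically follows the medallion pattern (bronze/silver/gold). A dependency-free sketch of the silver-to-gold step is below; a real Databricks pipeline would express this as PySpark over Delta tables, and the order records here are hypothetical:

```python
from collections import defaultdict

# Hypothetical cleansed ("silver") order records, including one duplicate.
SILVER_ORDERS = [
    {"order_id": "o1", "region": "EU", "amount": 120.0},
    {"order_id": "o2", "region": "EU", "amount": 80.0},
    {"order_id": "o3", "region": "US", "amount": 200.0},
    {"order_id": "o3", "region": "US", "amount": 200.0},  # duplicate to drop
]

def dedupe(rows, key):
    """Keep the first row seen per key -- a typical silver-layer rule."""
    seen, out = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out

def gold_revenue_by_region(rows):
    """Aggregate deduplicated orders into a gold-layer reporting summary."""
    totals = defaultdict(float)
    for row in dedupe(rows, "order_id"):
        totals[row["region"]] += row["amount"]
    return dict(totals)

summary = gold_revenue_by_region(SILVER_ORDERS)
# summary == {"EU": 200.0, "US": 200.0}
```

In Spark terms, `dedupe` corresponds to `dropDuplicates(["order_id"])` and the aggregation to `groupBy("region").sum("amount")` written out to a gold Delta table.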
About Us:- We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe. Position: Technical Architect DevOps. Qualifications: Minimum 8+ years of industry experience with strong knowledge of Software Engineering best practices. Proven expertise in designing, implementing, and managing DevOps processes, tools, and infrastructure. Technical Skills & Experience: Must-Have: Programming Languages: Python, Golang, Ruby, JavaScript (experience with any of these). Infrastructure as Code (IaC): Terraform, Ansible (proficiency in at least one). CI/CD Tooling: GitHub Actions, CircleCI, Jenkins, GitLab (hands-on experience with one or more). Cloud Platforms: AWS, Azure, or GCP (extensive hands-on experience with at least one major provider). Containerization & Orchestration: Kubernetes, Docker, Helm. Operating Systems: Linux and Windows (administration and troubleshooting). Networking: Load balancing, network security, standard network protocols. Nice-to-Have: Monitoring & Analytics Tools: Dynatrace, Splunk, Elastic, Prometheus. Container Orchestration Enhancements: RBAC, network policies, service meshes, Cluster API. Project Coordination: Experience managing timelines, priorities, and cross-functional communication.
Agile Methodologies: Prior experience working in Agile teams. Key Responsibilities: Lead and guide DevOps initiatives, ensuring best practices are followed across the development and deployment lifecycle. Design, implement, and maintain scalable CI/CD pipelines and infrastructure. Drive automation for cloud infrastructure, deployments, and monitoring systems. Collaborate with development teams, architects, and other stakeholders to align on technical goals. Troubleshoot and resolve complex infrastructure and deployment issues. Mentor team members, fostering continuous learning and technical excellence. What We Expect From You: A growth mindset with a strong willingness to learn emerging technologies in cloud, orchestration, networking, and operating systems. Deep focus on AWS services, with the ability to build and optimize scalable infrastructure. Commitment to knowledge-sharing and skill development within the team. Ability to take ownership, work independently, and deliver results in a fast-paced environment.
About Us:- We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe. Job Responsibilities: Design and Develop Data Pipelines: Development and optimisation of scalable data pipelines within Microsoft Fabric, leveraging Fabric-based Notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Build robust pipelines using both batch and real-time processing techniques. Integrate with Azure Data Factory or Fabric-native orchestration for seamless data movement. Microsoft Fabric Architecture: Work with the Data Architecture team to implement scalable, governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Align models with business needs, promoting performance, security, and cost-efficiency. Data Pipeline Optimisation: Continuously monitor, enhance, and optimise Fabric pipelines, notebooks, and lakehouse artifacts for performance, reliability, and cost. Implement best practices for managing large-scale datasets and transformations in a Fabric-first ecosystem. Collaboration with Cross-functional Teams: Work closely with analysts, BI developers, and data scientists to gather requirements and deliver high-quality, consumable datasets.
Enable self-service analytics via certified and reusable Power BI datasets connected to Fabric Lakehouses. Documentation and Knowledge Sharing: Maintain clear, up-to-date documentation for all data pipelines, semantic models, and data products. Share knowledge of Fabric best practices and mentor junior team members to support adoption across teams. Microsoft Fabric Platform Expertise: Use your expertise in Microsoft Fabric, including Lakehouses, Notebooks, Data Pipelines, and Direct Lake, to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services. Required Skills and Qualifications: Experience in Microsoft Fabric / Azure Ecosystem: 7+ years working with the Azure ecosystem, with relevant experience in Microsoft Fabric, including Lakehouse, OneLake, Data Engineering, and Data Pipelines components. Proficiency in Azure Data Factory and/or Dataflows Gen2 within Fabric for building and orchestrating data pipelines. Advanced Data Engineering Skills: Extensive experience in data ingestion, transformation, and ELT/ETL pipeline design. Ability to enforce data quality, testing, and monitoring standards in cloud platforms. Cloud Architecture Design: Experience designing modern data platforms using Microsoft Fabric, OneLake, and Synapse or equivalent. Strong/In-depth SQL and Data Modelling: Expertise in SQL and data modelling (e.g., star/snowflake schemas) for data integration/ETL, reporting, and analytics use cases. Collaboration and Communication: Proven ability to work across business and technical teams, translating business requirements into scalable data solutions. Cost Optimisation: Experience tuning pipelines and cloud resources (Fabric, Databricks, ADF) for cost-performance balance. Preferred Skills: Deep understanding of Azure and the Microsoft Fabric ecosystem, including Power BI integration, Direct Lake, and Fabric-native security and governance.
Familiarity with OneLake, Delta Lake, and Lakehouse architecture as part of a modern data platform strategy. Experience using Power BI with Fabric Lakehouses and DirectQuery/Direct Lake mode for enterprise reporting. Working knowledge of PySpark, strong SQL, and Python scripting within Fabric or Databricks notebooks. Understanding of Microsoft Purview, Unity Catalog, or Fabric-native governance tools for lineage, metadata, and access control. Experience with DevOps practices for Fabric or Power BI, including version control, deployment pipelines, and workspace management. Knowledge of Azure Databricks: familiarity with building and optimising Spark-based pipelines and Delta Lake models as part of a modern data platform is an added advantage.
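The star-schema modelling called out above can be illustrated with a tiny in-memory example; sqlite3 stands in for a warehouse SQL engine, and the table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A minimal star schema: one fact table with a foreign key into one dimension.
cur.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (sale_id INTEGER PRIMARY KEY,
                              product_id INTEGER REFERENCES dim_product,
                              amount REAL);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales  VALUES (10, 1, 25.0), (11, 1, 15.0), (12, 2, 60.0);
""")

# Typical analytical query: aggregate the fact table, grouped by a dimension
# attribute -- the access pattern star schemas are designed to make cheap.
rows = cur.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
# rows == [('books', 40.0), ('games', 60.0)]
```

A snowflake schema differs only in that the dimensions are further normalised (e.g. `dim_product` pointing at a separate `dim_category` table); the fact-table aggregation pattern stays the same.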