Location : Remote
Mode : Contract (2 Months with Possible Extension)
Years of experience : 3+ Years
Shift : UK shift

Job Summary
We are looking for a highly motivated and detail-oriented Data Engineer with a strong background in data cleansing, Python scripting, and SQL to join our team. The ideal candidate will play a critical role in ensuring data quality, transforming raw datasets into actionable insights, and supporting data-driven decision-making across the organization.

Key Responsibilities
- Design and implement efficient data cleansing routines to remove duplicates, correct anomalies, and validate data integrity.
- Write robust Python scripts to automate data processing, transformation, and integration tasks.
- Develop and optimize SQL queries for data extraction, aggregation, and reporting.
- Work closely with data analysts, business stakeholders, and engineering teams to understand data requirements and deliver clean, structured datasets.
- Build and maintain data pipelines that support large-scale data processing.
- Monitor data workflows and troubleshoot issues to ensure accuracy and reliability.
- Contribute to documentation of data sources, transformations, and cleansing logic.

Requirements
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- 3+ years of hands-on experience in data engineering, with a focus on data quality and cleansing.
- Strong proficiency in Python, including libraries like Pandas and NumPy.
- Expert-level knowledge of SQL and working with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Familiarity with data profiling tools and techniques.
- Excellent problem-solving skills and attention to detail.
- Good communication and documentation skills.

Preferred Qualifications
- Experience with cloud platforms (AWS, Azure, GCP) and data services (e.g., S3, BigQuery, Redshift).
- Knowledge of ETL tools like Apache Airflow, Talend, or similar.
- Exposure to data governance and data cataloging practices.

(ref:hirist.tech)
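The cleansing routines this role describes (deduplication, anomaly correction, integrity validation) can be sketched in Pandas. This is a minimal illustration, not the team's actual pipeline; the `id`, `email`, and `amount` columns and the rules applied to them are hypothetical:

```python
import pandas as pd

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleansing pass: dedupe, normalize, validate."""
    out = df.copy()
    # Remove exact duplicate records.
    out = out.drop_duplicates()
    # Normalize a text column (hypothetical "email" field).
    out["email"] = out["email"].str.strip().str.lower()
    # Correct anomalies: flag negative amounts as missing for review.
    out.loc[out["amount"] < 0, "amount"] = float("nan")
    # Validate integrity: drop rows missing the primary key.
    out = out.dropna(subset=["id"])
    return out

raw = pd.DataFrame({
    "id": [1, 1, 2, None],
    "email": [" A@X.COM ", " A@X.COM ", "b@y.com", "c@z.com"],
    "amount": [10.0, 10.0, -5.0, 3.0],
})
clean = cleanse(raw)
print(len(clean))  # prints 2: one duplicate and one keyless row removed
```

In practice each rule would come from profiling the real source data rather than being hardcoded.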
Location : Remote
Years of experience : 15+ Years
Shift : US shift

Job Summary
We are looking for a seasoned UiPath Process Mining Consultant with 15-20 years of experience in the manufacturing industry. The ideal candidate will have a deep understanding of manufacturing operations, extensive experience in process optimization, and hands-on expertise with UiPath Process Mining. Familiarity with QAD ERP systems and strong integration knowledge is essential. This role involves working with cross-functional stakeholders to uncover inefficiencies, enable process transparency, and drive data-led decision-making for continuous improvement.

Key Responsibilities
- Lead comprehensive Process Mining initiatives using UiPath across manufacturing business functions.
- Partner with stakeholders across operations, quality, supply chain, and IT to map and analyze critical processes.
- Integrate various enterprise systems, especially QAD ERP, MES, and PLM platforms, with the Process Mining tool.
- Interpret large volumes of data to identify bottlenecks, redundancies, and areas suitable for automation.
- Create visual dashboards, KPIs, and actionable reports to support executive decision-making.
- Collaborate with CoE teams to translate mined insights into automated process pipelines.
- Ensure compliance with audit and governance frameworks through detailed documentation and process transparency.
- Guide junior team members and act as a domain expert in manufacturing and process transformation.

Required Skills & Qualifications
- Bachelor's/Master's degree in Engineering, Computer Science, Operations, or a related discipline.
- 15-20 years of experience in the manufacturing sector, with a focus on process improvement, automation, or digital transformation.
- Minimum 3-5 years of hands-on experience with UiPath Process Mining or similar tools (e.g., Celonis, Signavio).
- Proven experience with QAD ERP and understanding of its data structures and process flows.
- Deep knowledge of manufacturing business processes, including production, quality, logistics, and supply chain.
- Strong skills in SQL, data modeling, and business intelligence/reporting tools.
- Excellent communication skills with the ability to influence cross-functional stakeholders.
- Strategic mindset with a track record of driving value through data and automation.

Preferred Qualifications
- UiPath certifications in Process Mining, Automation, or RPA.
- Exposure to Lean, Six Sigma, or other process improvement frameworks.
- Experience leading digital transformation projects in a manufacturing environment.
- Familiarity with integration platforms and data lakes.
Job Description

Job Title : QA Functional Tester (Web, Mobile & AI Applications)
Location : Coimbatore / Remote
Job Type : Contract
Experience Level : 8+ Years

Overview
We are seeking an experienced QA Functional Tester to ensure the quality, reliability, and performance of our Web and Mobile applications integrated with AI-driven functionalities. The ideal candidate will have strong expertise in functional testing, data validation, and AI/ML application testing, with the ability to analyze source data, workflows, and application-level outputs to guarantee accuracy and compliance.

Key Responsibilities
- Perform end-to-end functional testing of web and mobile applications, covering UI, workflows, integrations, and performance.
- Validate responsive design, usability, and cross-browser/device compatibility.
- Test AI-driven features such as predictions and recommendations for accuracy, fairness, and relevance.
- Analyze training datasets, source data, and model outputs to ensure integrity and correctness.
- Perform data validation across APIs, ETL pipelines, and third-party integrations.
- Develop detailed test strategies, plans, and cases for functional, regression, and exploratory testing.
- Collaborate with automation teams to integrate test cases into CI/CD pipelines.
- Document defects, track resolutions, and provide detailed QA reports for sign-off.
- Support UAT and production release validations.

Required Skills & Qualifications
- 8+ years of experience in QA functional testing (Web & Mobile).
- Hands-on experience testing AI/ML applications (model validation, fairness, explainability).
- Strong expertise in SQL, data profiling, and API validation.
- Proficiency with test management tools (Jira, TestRail, Azure DevOps, etc.).
- Experience with API testing tools (Postman, REST Assured, etc.).
- Understanding of AI concepts (classification, regression, NLP, recommendation systems).
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
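The API data-validation work this role calls for can be illustrated with a tiny, self-contained sketch. The API here is a stub and the response fields (`user_id`, `items`, `score`) are invented for the example; a real suite would hit the service through Postman or a test client:

```python
def get_recommendation(user_id: int) -> dict:
    """Stub standing in for a real recommendation API call."""
    return {"user_id": user_id, "items": ["a", "b", "c"], "score": 0.91}

def validate_response(resp: dict) -> list:
    """Functional checks a tester might automate: schema, non-emptiness, value ranges."""
    errors = []
    if not isinstance(resp.get("user_id"), int):
        errors.append("user_id missing or not an int")
    if not resp.get("items"):
        errors.append("items must be a non-empty list")
    if not (0.0 <= resp.get("score", -1) <= 1.0):
        errors.append("score out of [0, 1] range")
    return errors

# A well-formed response passes; a malformed one is itemized for the defect report.
assert validate_response(get_recommendation(42)) == []
assert "score out of [0, 1] range" in validate_response(
    {"user_id": 1, "items": ["x"], "score": 1.5}
)
```

The same pattern extends to fairness checks (comparing score distributions across user segments) once real model outputs are available.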
Job Summary
We are seeking a Data Engineer with expertise in SQL, Data Modeling, AWS Glue, Python, and MS Excel. The ideal candidate will design scalable data solutions, build reliable pipelines, and ensure high-quality, well-modeled data to drive business insights.

Key Responsibilities
- Design and maintain robust data models to support analytics and reporting needs.
- Develop and optimize ETL pipelines using AWS Glue for scalable data processing.
- Write and tune SQL queries to extract, transform, and analyze large datasets.
- Use Python for data automation, validation, and process improvement.
- Leverage MS Excel for advanced data analysis, reporting, and validation tasks.
- Collaborate with cross-functional teams to understand business requirements and translate them into effective data solutions.
- Ensure data accuracy, reliability, and governance across all environments.

Requirements
- 5+ years of experience in data engineering or related roles.
- Strong proficiency in SQL for complex query development and optimization.
- Expertise in data modeling techniques (relational and dimensional).
- Hands-on experience with AWS Glue for building and managing ETL pipelines.
- Proficiency in Python for scripting and workflow automation.
- Advanced MS Excel skills for reporting, validation, and analysis.
- Excellent problem-solving skills, attention to detail, and ability to work in a collaborative environment.
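The dimensional modeling this posting asks for can be illustrated with a tiny star schema. SQLite stands in for the real warehouse, and the table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Dimension table: one row per product, holding descriptive attributes.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")
# Fact table: one row per sale, keyed to the dimension by product_id.
cur.execute("CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "widget"), (2, "gadget")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5)])
# Typical analytical query: aggregate the fact table grouped by a dimension attribute.
rows = cur.execute(
    "SELECT p.name, SUM(f.amount) FROM fact_sales f "
    "JOIN dim_product p ON p.product_id = f.product_id "
    "GROUP BY p.name ORDER BY p.name"
).fetchall()
print(rows)  # [('gadget', 7.5), ('widget', 15.0)]
```

The separation of descriptive attributes (dimension) from measurable events (fact) is what keeps such queries simple as the schema grows.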
Job Description

Name of the position : QuickSight Engineer
Location : Remote
No. of resources needed : 01
Mode : Contract
Years of experience : 5+ Years
Shift : UK shift

Job Summary
We are seeking a QuickSight Engineer with expertise in AWS QuickSight, SQL, AWS Glue, and performance optimization. The ideal candidate will be responsible for building high-performing dashboards, optimizing data pipelines, and enabling actionable insights for business teams.

Key Responsibilities
- Design, develop, and maintain interactive dashboards and visualizations using AWS QuickSight.
- Write and optimize SQL queries to ensure efficient data extraction and transformation.
- Use AWS Glue to build and maintain scalable data pipelines for reporting and analytics.
- Optimize dashboard performance, ensuring fast load times and a responsive user experience.
- Collaborate with business stakeholders to gather requirements and translate them into effective data solutions.
- Ensure data accuracy, reliability, and security across all reporting solutions.
- Stay updated with AWS best practices to enhance reporting and analytics capabilities.

Requirements
- 5+ years of experience in data engineering, BI, or analytics roles with a focus on AWS QuickSight.
- Strong proficiency in AWS QuickSight for dashboard development and advanced visualization.
- Expert knowledge of SQL for querying, data preparation, and optimization.
- Hands-on experience with AWS Glue for building and managing ETL pipelines.
- Proven track record of dashboard and query performance tuning to meet business SLAs.
- Strong analytical mindset, attention to detail, and ability to solve complex data challenges.
- Excellent collaboration and communication skills to work with cross-functional teams.
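One concrete form the query-tuning skill above takes is reading a query plan before and after adding an index. SQLite's `EXPLAIN QUERY PLAN` is used here only as a compact stand-in; the `events` table is invented, and tuning for QuickSight backends like Redshift involves sort/distribution keys and SPICE rather than B-tree indexes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany("INSERT INTO events (region, amount) VALUES (?, ?)",
                [("eu", float(i)) for i in range(1000)])
query = "SELECT SUM(amount) FROM events WHERE region = ?"
# Without an index on the filter column, the planner must scan the whole table.
plan_before = cur.execute("EXPLAIN QUERY PLAN " + query, ("eu",)).fetchall()
# With an index, the planner can seek directly to matching rows.
cur.execute("CREATE INDEX idx_events_region ON events (region)")
plan_after = cur.execute("EXPLAIN QUERY PLAN " + query, ("eu",)).fetchall()
print(plan_before[-1][-1])  # a SCAN step (exact wording varies by SQLite version)
print(plan_after[-1][-1])   # a SEARCH step using idx_events_region
```

The same before/after discipline applies when tuning a slow dashboard dataset: inspect the plan, change the physical design, and confirm the plan actually improved.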
Role Overview
We are seeking a highly experienced Backend .NET API Developer with strong expertise in designing and building secure, scalable APIs using the Microsoft .NET stack and hands-on experience with AWS DynamoDB. The ideal candidate will have experience delivering enterprise-grade backend solutions; prior exposure to Medical Devices, Healthcare, or Pharmaceuticals will be considered a strong plus. This role will be pivotal in architecting and implementing robust backend services, integrating with internal and external systems, and ensuring security, performance, and scalability.

Key Responsibilities

API Development & Architecture :
- Design, develop, and maintain RESTful and/or GraphQL APIs using .NET Core / ASP.NET Core.
- Architect backend systems with scalability, performance, and maintainability in mind.
- Integrate APIs with various data sources, including AWS DynamoDB, relational databases, and external services.

AWS DynamoDB Expertise :
- Design table structures, indexes, and query patterns for high-performance workloads.
- Implement DynamoDB best practices, including data modeling for scalability, TTL, streams, and backup/restore.
- Optimize cost and performance for high-volume workloads.

Cloud & Infrastructure :
- Deploy, manage, and monitor services in AWS (Lambda, API Gateway, S3, CloudWatch, IAM, etc.).
- Collaborate with DevOps teams to implement CI/CD pipelines and automated deployments.

Collaboration & Leadership :
- Work closely with frontend developers, QA teams, and product managers to deliver features end-to-end.
- Mentor junior developers, conduct code reviews, and enforce coding standards.
- Participate in architectural discussions and technology selection.

Required Qualifications

Technical Skills :
- 10+ years of professional software development experience.
- Strong experience in .NET Core / ASP.NET Core, C#, and object-oriented programming.
- Proven expertise in AWS DynamoDB (data modeling, performance tuning, and operational best practices).
- Experience with AWS serverless services (Lambda, API Gateway, SQS/SNS, Step Functions).
- Solid understanding of microservices architecture and distributed systems.
- Proficient with relational databases (SQL Server, PostgreSQL) and ORM frameworks (Entity Framework).
- Knowledge of secure API design (OAuth2, JWT, API rate limiting).

Other Skills :
- Excellent problem-solving and debugging skills.
- Strong communication skills for cross-functional collaboration.
- Ability to work in agile environments with distributed teams.

Preferred Qualifications
- Experience with event-driven architectures (Kafka, Kinesis).
- Experience with containerization (Docker, Kubernetes).
- Familiarity with CI/CD using Azure DevOps, GitHub Actions, or AWS CodePipeline.
- Background in regulated software development lifecycle (SDLC).
- Prior experience in Medical Devices, Healthcare, or Pharmaceuticals domains.
- Familiarity with compliance and quality standards (HIPAA, HL7/FHIR, FDA 21 CFR Part 11, GxP).
- Exposure to FHIR/HL7 healthcare interoperability standards.
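The secure API design point above (OAuth2, JWT) rests on one core mechanism: a signed, tamper-evident token. The sketch below shows the HS256 idea using only the standard library (Python here purely for compactness; in the .NET stack this is handled by libraries such as the ASP.NET Core JWT bearer middleware). The secret is a placeholder, and a production service should use a vetted JWT library rather than hand-rolled code:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """URL-safe base64 without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    """Build a JWT-shaped token: header.payload.signature (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

secret = b"placeholder-secret"  # hypothetical; never hardcode real secrets
token = sign_token({"sub": "user-1"}, secret)
print(verify_token(token, secret))        # True
print(verify_token(token + "x", secret))  # False: any tampering breaks the signature
```

The constant-time comparison matters: naive string equality leaks timing information an attacker can exploit to forge signatures byte by byte.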
Description

Name of the position : Data Engineer
Location : Coimbatore / Remote
No. of resources needed : 01
Mode : Contract to Hire
Years of experience : 15+ Years

About The Role
We are seeking a highly skilled and driven Data Engineering Lead to lead our data engineering team. The ideal candidate combines strong leadership and technical expertise with the ability to deliver results under tight timelines. You will own the end-to-end design, development, and delivery of scalable data platforms and solutions, ensuring data availability, reliability, and usability across the organization.

Key Responsibilities
- Lead, mentor, and grow a high-performing team of data engineers.
- Own the design, implementation, and optimization of data pipelines, data models, and data architectures across OLAP and operational systems.
- Drive critical data initiatives including ingestion, transformation, governance, and analytics enablement.
- Partner with product, analytics, and business teams to align data solutions with business objectives, especially in the supply chain domain.
- Ensure timely and high-quality delivery of data solutions, meeting strict deadlines and performance benchmarks.
- Foster a culture of ownership, accountability, and continuous improvement within the team.
- Provide technical direction on AWS cloud-native data services including Redshift, Glue, and QuickSight.
- Oversee the creation and optimization of stored procedures, SQL scripts, Python utilities, and JSON-based workflows.
- Define and enforce standards for data modeling, metadata management, and data layer architecture.
- Collaborate with stakeholders to design data visualization and reporting solutions (Figma, QuickSight, etc.).
- Apply critical thinking to solve complex data challenges and guide the organization toward data-driven decision-making.

Required Skills & Experience
- Leadership : Proven experience managing and mentoring a team of data engineers.
- Technical Expertise : Strong proficiency in AWS Redshift, Glue, and QuickSight; advanced SQL and stored procedure development; experience with Python, JSON, and automation scripting; deep knowledge of data modeling, OLAP systems, and data layer architecture.
- Domain Knowledge : Understanding of Supply Chain systems and processes.
- Tools & Platforms : Experience with Figma (for data/UX design) and visualization frameworks.
- Soft Skills : Ownership mindset, ability to meet high-performance delivery timelines, and excellent critical thinking.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience working in fast-paced environments with high-volume data systems.
- Exposure to model engineering and advanced analytics pipelines.
- Prior experience with data governance frameworks and cloud data warehousing best practices.
Description

Name of the position : Microsoft 365 Migration Consultant
Location : Remote
Mode : Contract (3 Months)
Years of experience : 4+ Years
Shift : UK shift

Overview
We are seeking an experienced Microsoft 365 Migration Engineer with 4 to 8 years of hands-on experience in managing and executing complex migrations across Exchange Online, OneDrive for Business, SharePoint Online, and Teams. The ideal candidate will possess strong technical expertise in Microsoft 365 services, migration tools, automation scripting, and governance. This role requires excellent analytical, communication, and stakeholder management skills to ensure seamless migration and user adoption in enterprise environments.

Key Responsibilities
- Plan, execute, and deliver successful migrations across Microsoft 365 workloads, including Exchange, OneDrive, SharePoint, and Teams.
- Conduct pre-migration assessments, develop detailed migration strategies, and oversee cut-over activities.
- Utilize industry-standard migration tools such as Quest, ShareGate, BitTitan, DryvIQ, Metalogix, or Microsoft Migration Manager for data migration.
- Develop and maintain automation scripts using PowerShell or the Graph API to streamline migration processes.
- Implement and manage Microsoft 365 governance, security, and compliance frameworks including identity (Azure AD/Entra), DLP, retention, and archiving policies.
- Monitor and report on migration progress, performance metrics, and data integrity post-migration.
- Collaborate closely with IT, business stakeholders, and end-users to ensure alignment with business goals.
- Provide change management and end-user adoption support, including training, FAQs, and transition documentation.
- Handle large-scale data migrations involving terabytes of data and hundreds of users/sites with high accuracy and minimal downtime.

Required Skills & Experience
- 4-8 years of experience working with Microsoft 365 services : Exchange Online, OneDrive for Business, SharePoint Online, and Teams.
- Proven experience executing email/OneDrive/SharePoint migrations (on-premises to cloud, cloud-to-cloud, or tenant-to-tenant).
- Strong knowledge of migration tools (Quest, ShareGate, BitTitan, Metalogix, etc.) and automation scripting (PowerShell, Graph API).
- Deep understanding of Microsoft 365 governance, security, identity (Azure AD/Entra), licensing, DLP, retention, and compliance.
- Strong analytical, troubleshooting, and problem-solving abilities with attention to detail.
- Excellent communication and stakeholder management skills across technical and business teams.
- Experience in change management and driving user adoption initiatives post-migration.

Preferred Skills
- Microsoft 365 certification (e.g., MS-100, MS-101, or related).
- Experience with mergers and acquisitions IT integrations : tenant consolidation, separation, or carve-outs.
- Knowledge of legacy systems (Exchange, file shares, Google Workspace, etc.) and data mapping to Microsoft 365.
- Experience handling large tenant-to-tenant migrations (>100 users).
- Familiarity with Microsoft Power Platform (Power Apps/Power Automate) as part of the migration scope.
- Ability to create migration dashboards, metrics, and executive reports for management.
Description

Name of the position : Project Manager - Data Engineering
Location : Coimbatore / Chennai / Bangalore
No. of resources needed : 01
Mode : Contract to Hire (Min. 3 Months Contract)
Years of experience : 10+ Years
Shift : UK shift (2pm to 11pm)

Overview
We are looking for an experienced Data Engineering Project Manager to lead end-to-end delivery of data platform, analytics, and integration projects. The ideal candidate will have a strong foundation in data engineering concepts, combined with proven project management and stakeholder leadership skills. This role requires a hands-on understanding of data pipelines, data warehousing, cloud platforms, and data governance, ensuring projects are delivered on time, within scope, and aligned with business objectives.

Key Responsibilities
- Lead and manage data engineering and analytics projects from initiation to deployment.
- Work closely with stakeholders to define project scope, deliverables, timelines, and resources.
- Collaborate with data engineers, architects, and analysts to ensure technical feasibility and quality outcomes.
- Oversee data pipeline development, ETL/ELT processes, and cloud data platform implementation.
- Track and manage project risks, dependencies, and budgets.
- Ensure alignment with data governance, quality, and security standards.
- Conduct regular status reviews, stakeholder updates, and team stand-ups.
- Identify opportunities for process automation, optimization, and best practice adoption.
- Manage vendor coordination and cross-functional collaboration across business and IT teams.

Required Skills & Qualifications
- 8+ years of experience in IT, with 3-5 years in project management and a strong background in data engineering or analytics.
- Hands-on understanding of ETL tools, data pipelines, and data integration frameworks.
- Experience managing projects using Snowflake, Azure Data Factory, AWS Glue, or GCP BigQuery.
- Solid understanding of data modeling, warehousing, and data lake architectures.
- Proficiency in Agile and Waterfall project methodologies.
- Strong experience with SQL; exposure to Python or Spark is a plus.
- Familiarity with reporting tools (Power BI, Tableau) and data governance frameworks.
- Excellent communication, leadership, and stakeholder management skills.
- Experience using Jira, Confluence, or MS Project for tracking and documentation.
Description

Job Title : Data Engineer (GCP)
Experience : 5+ Years
Location : Remote / India
Shift : UK Shift (1 PM to 10 PM IST)
Employment Type : Full-time / Contract (as applicable)

About The Role
We are seeking a highly skilled Data Engineer with hands-on experience in Google Cloud Platform (GCP) to join our growing data team. The ideal candidate will bring strong expertise in building scalable data pipelines, transforming large data sets, and implementing cloud-based solutions while working collaboratively with business and engineering stakeholders.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL/ELT solutions in GCP.
- Work with tools such as BigQuery, Cloud Storage, Cloud Composer, Dataflow, Dataproc, Pub/Sub, and other GCP-native services.
- Implement data models and optimize data performance, reliability, and accessibility.
- Collaborate with cross-functional teams including Data Analysts, Product, and Engineering to understand data needs and deliver solutions.
- Build, manage, and support CI/CD workflows and DevOps processes for data engineering solutions.
- Ensure data security, compliance, governance, and adherence to best practices.
- Troubleshoot and optimize existing pipelines for scalability and performance.

Required Skills & Qualifications
- Minimum 5 years of professional experience as a Data Engineer.
- Strong hands-on expertise with Google Cloud Platform (GCP).
- Proficiency in SQL and Python for data processing and automation.
- Experience with data warehousing solutions (preferably BigQuery).
- Strong understanding of ETL/ELT frameworks, data modeling concepts, and distributed processing.
- Knowledge of CI/CD workflows and version control (Git, Cloud Build, Terraform, etc.).
- Experience working in Agile environments.

Nice To Have
- Experience with dbt, Airflow, machine learning pipelines, and Kafka or other streaming technologies.
- GCP Professional Data Engineer certification.

Work Schedule
Must be able to work in the UK Shift (1 PM to 10 PM IST) with flexibility based on delivery and stakeholder collaboration.

Why Join Us?
- Opportunity to work with global teams and modern cloud data platforms.
- Competitive compensation and growth opportunities.
- Collaborative, innovative work culture supporting learning and certifications.