Cymetrix

14 Job openings at Cymetrix
GCP Data Modeller Chennai, Tamil Nadu, India 5 years Not disclosed On-site Full Time

Location: Bangalore / Chennai

● Hands-on data modelling for OLTP and OLAP systems
● In-depth knowledge of Conceptual, Logical and Physical data modelling
● Strong understanding of indexing, partitioning and data sharding, with practical experience of having implemented them
● Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
● Working experience with at least one data modelling tool, preferably DBSchema or Erwin
● Good understanding of GCP databases such as AlloyDB, Cloud SQL, and BigQuery
● Functional knowledge of the mutual fund industry is a plus

Role & Responsibilities
● Work with business users and other stakeholders to understand business processes.
● Design and implement Dimensional and Fact tables.
● Identify and implement data transformation/cleansing requirements.
● Develop highly scalable, reliable, and high-performance data processing pipelines to extract, transform and load data from various systems into the Enterprise Data Warehouse.
● Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique.
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary.
● Partner with the BI team to evaluate, design and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information and data from ERP, CRM and HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experience/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers and quality issues; continuously validate reports and dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.

Required Skills
● Bachelor's degree in Computer Science or a similar field, or equivalent work experience.
● 5+ years of experience on Data Warehousing, Data Engineering or Data Integration projects.
● Expert with data warehousing concepts, strategies, and tools.
● Strong SQL background.
● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
● Strong experience in GCP and Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions and GCS.
● Good to have: knowledge of SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS).
● Knowledge of AWS and Azure Cloud is a plus.
● Experience in Informatica PowerExchange for Mainframe, Salesforce, and other new-age data sources.
● Experience in integration using APIs, XML, JSON, etc.

Skills: Data modeling, OLAP, OLTP, BigQuery and Google Cloud Platform (GCP)
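For illustration only, a minimal sketch of the partitioning and clustering point raised above, using the google-cloud-bigquery Python client. This is not part of the listing; the project, dataset, table and column names are hypothetical.

```python
# Hypothetical example: create a date-partitioned, clustered BigQuery table
# so near-real-time reporting queries only scan relevant partitions.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumes application default credentials

schema = [
    bigquery.SchemaField("txn_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("fund_code", "STRING"),
    bigquery.SchemaField("txn_amount", "NUMERIC"),
    bigquery.SchemaField("txn_date", "DATE", mode="REQUIRED"),
]

table = bigquery.Table("my-project.analytics.fact_transactions", schema=schema)
# Partition by the date column and cluster by fund_code to prune scans
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="txn_date"
)
table.clustering_fields = ["fund_code"]

table = client.create_table(table)  # raises if the table already exists
print(f"Created {table.full_table_id}")
```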

SIP Developer India 0 years Not disclosed On-site Full Time

Job Description: We are seeking a highly skilled Telephony Integration Developer with deep expertise in SIP (Session Initiation Protocol) and SIPREC (SIP Recording) to join our growing team. You will be responsible for designing, developing, and integrating telephony systems with a strong emphasis on VoIP communication, call recording, and SIP signaling.

Responsibilities:
● Design and implement telephony integrations using SIP and SIPREC.
● Develop APIs and backend services to handle call control, call recording, and session management.
● Work with PBX systems, SIP Servers, and Media Servers for SIP call flows and media capture.
● Integrate third-party VoIP systems with internal applications and platforms.
● Analyze and troubleshoot SIP signaling and RTP media flows.
● Collaborate with cross-functional teams including DevOps, Product, and QA to deliver scalable solutions.
● Create technical documentation, diagrams, and support material.
● Ensure systems are secure, resilient, and scalable.

Must-Have Skills:
● Strong experience with the SIP protocol (INVITE, ACK, BYE, REGISTER, REFER, OPTIONS, etc.)
● Practical experience with SIPREC for recording VoIP calls.
● Solid development skills in JavaScript (Node.js).
● Experience working with SIP Servers (e.g., FreeSWITCH, Asterisk, Kamailio, OpenSIPS).
● Hands-on knowledge of WebRTC, RTP streams, and VoIP media handling.
● Experience building and consuming RESTful APIs.
● Familiarity with call flows and SIP trace analysis (using Wireshark, sngrep, or similar).
● Strong understanding of networking basics (UDP, TCP, NAT traversal, STUN/TURN).
● Ability to troubleshoot and debug complex telephony and media issues.

Good to Have Skills:
● Experience with Media Servers (e.g., Janus, Kurento, Mediasoup).
● Knowledge of Call Recording Systems architecture and compliance standards (PCI-DSS, GDPR).
● Experience with Cloud Telephony Platforms (Twilio, Genesys Cloud, Amazon Chime SDK, etc.).
● Familiarity with Session Border Controllers (SBCs).
● Prior experience with SIP trunking and carrier integrations.
● Exposure to Protocol Buffers or gRPC for real-time messaging.
● Understanding of security practices in VoIP (TLS, SRTP, SIP over WebSockets).
● Knowledge of Docker and Kubernetes for deploying SIP services at scale.
● Sound knowledge of telecom protocols such as SIP/ICE/STUN/TURN/SRTP/DTLS/H.323/Diameter/Radius.
● Thoroughly analytical, able to diagnose and fix issues across an SBC portfolio of products.
● Thorough grounding in Linux/RTOS internals and product architecture is preferred.
● Strong knowledge of TCP/UDP/IP and networking concepts is a must.
● Knowledge of IP telephony, SIP, and call routing techniques (ARS, AAR) in a trunk configuration environment.
● Prior experience working with FreeSWITCH, Kamailio, RTP Proxy, etc.
● Strong understanding of audio streaming/WebSockets and their application in real-time communication systems.
● In-depth knowledge of audio codecs and their impact on voice quality and bandwidth utilization.
● Experience with gRPC and Protobuf for building efficient and scalable communication interfaces.
● Extensive experience in large-scale product development of Enterprise, WebRTC, VoIP and VoLTE based products.

Base Language/Framework:
● Primary Language: JavaScript (Node.js backend)
● Frameworks/Tools: Express.js, Socket.io (for signaling if needed), Wireshark (for debugging), sngrep.

Senior .NET Developer Mumbai Metropolitan Region 5 years INR 8.0 - 16.0 Lacs P.A. On-site Full Time

Key Responsibilities
● Design, develop, and maintain scalable web applications using .NET Core, .NET Framework, C#, and related technologies.
● Participate in all phases of the SDLC, including requirements gathering, architecture design, coding, testing, deployment, and support.
● Build and integrate RESTful APIs, and work with SQL Server, Entity Framework, and modern front-end technologies such as Angular, React, and JavaScript.
● Conduct thorough code reviews, write unit tests, and ensure adherence to coding standards and best practices.
● Lead or support .NET Framework to .NET Core migration initiatives, ensuring minimal disruption and optimal performance.
● Implement and manage CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI/CD.
● Containerize applications using Docker and deploy/manage them on orchestration platforms like Kubernetes or GKE.
● Lead and execute database migration projects, particularly transitioning from SQL Server to PostgreSQL.
● Manage and optimize Cloud SQL for PostgreSQL, including configuration, tuning, and ongoing maintenance.
● Leverage Google Cloud Platform (GCP) services such as GKE, Cloud SQL, Cloud Run, and Dataflow to build and maintain cloud-native solutions.
● Handle schema conversion and data transformation tasks as part of migration and modernization efforts.

Required Skills & Experience
● 5+ years of hands-on experience with C#, .NET Core, and .NET Framework.
● Proven experience in application modernization and cloud-native development.
● Strong knowledge of containerization (Docker) and orchestration tools like Kubernetes/GKE.
● Expertise in implementing and managing CI/CD pipelines.
● Solid understanding of relational databases and experience in SQL Server to PostgreSQL migrations.
● Familiarity with cloud infrastructure, especially GCP services relevant to application hosting and data processing.
● Excellent problem-solving and communication skills.

Skills: C#, .NET, .NET Compact Framework, SQL, Microsoft Windows Azure, CI/CD, Google Cloud Platform (GCP), React.js and Data-flow analysis

Azure Data Engineer Pune, Maharashtra, India 6 years Not disclosed On-site Full Time

Data Engineer (Azure)
Hybrid work mode - Pune/Bangalore/Noida
Minimum 6 years of experience. Max CTC: 16 LPA
● (Azure) EDW experience loading Star schema data warehouses using framework architectures, including experience loading type 2 dimensions.
● Ingesting data from various sources (structured and semi-structured), with hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Storage Gen2, SQL (expert), Python (intermediate), Azure Cloud Services knowledge, data analysis (SQL), data warehousing, documentation – BRD, FRD, user story creation.
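For illustration, a minimal sketch of the "loading type 2 dimensions" requirement as it might look on Azure Databricks with Delta Lake. This is an assumption-based example, not part of the listing: the table and column names (dim_customer, customer_id, row_hash, is_current, effective_from/to) are hypothetical, and it assumes the staging schema matches the dimension's business columns.

```python
# Hypothetical SCD Type 2 load for a Delta dimension table (illustrative only).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

updates = spark.read.table("staging_customer")        # incoming batch from the source/API
dim = DeltaTable.forName(spark, "dim_customer")       # existing type 2 dimension

# Step 1: close out current rows whose attributes changed (row_hash is a hypothetical
# hash of the tracked attribute columns).
(dim.alias("d")
    .merge(updates.alias("s"), "d.customer_id = s.customer_id AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.row_hash <> s.row_hash",
        set={"is_current": "false", "effective_to": "current_date()"})
    .execute())

# Step 2: append a new current version for changed keys and brand-new keys.
new_rows = (updates.alias("s")
    .join(spark.read.table("dim_customer").where("is_current = true").alias("d"),
          "customer_id", "left_anti")
    .withColumn("is_current", F.lit(True))
    .withColumn("effective_from", F.current_date())
    .withColumn("effective_to", F.lit(None).cast("date")))

new_rows.write.format("delta").mode("append").saveAsTable("dim_customer")
```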

Azure Data Engineer Pune, Maharashtra, India 0 years INR 10.0 - 18.0 Lacs P.A. On-site Full Time

Hybrid work mode
● (Azure) EDW experience loading Star schema data warehouses using framework architectures, including experience loading type 2 dimensions.
● Ingesting data from various sources (structured and semi-structured), with hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Storage Gen2, SQL (expert), Python (intermediate), Azure Cloud Services knowledge, data analysis (SQL), data warehousing, documentation – BRD, FRD, user story creation.
Skills: Windows Azure, SQL Azure, SQL, Data Warehouse (DWH), Data Analytics, Python, Star schema and Data Warehousing

Backend Developer Mumbai Metropolitan Region 2 years INR 1.0 - 18.0 Lacs P.A. On-site Full Time

Mumbai (Malad), work from office; 6 days working, with the 1st and 3rd Saturdays off.

AWS Expertise: Minimum 2 years of experience working with AWS services like RDS, S3, EC2, and Lambda.

Roles And Responsibilities
● Backend Development: Develop scalable and high-performance APIs and backend systems using Node.js. Write clean, modular, and reusable code following best practices. Debug, test, and optimize backend services for performance and scalability.
● Database Management: Design and maintain relational databases using MySQL, PostgreSQL, or AWS RDS. Optimize database queries and ensure data integrity. Implement data backup and recovery plans.
● AWS Cloud Services: Deploy, manage, and monitor applications using AWS infrastructure. Work with AWS services including RDS, S3, EC2, Lambda, API Gateway, and CloudWatch. Implement security best practices for AWS environments (IAM policies, encryption, etc.).
● Integration and Microservices: Integrate third-party APIs and services. Develop and manage microservices architecture for modular application development.
● Version Control and Collaboration: Use Git for code versioning and maintain repositories. Collaborate with front-end developers and project managers for end-to-end project delivery.
● Troubleshooting and Debugging: Analyze and resolve technical issues and bugs. Provide maintenance and support for existing backend systems.
● DevOps and CI/CD: Set up and maintain CI/CD pipelines. Automate deployment processes and ensure zero-downtime releases.
● Agile Development: Participate in Agile/Scrum ceremonies such as daily stand-ups, sprint planning, and retrospectives. Deliver tasks within defined timelines while maintaining high quality.

Required Skills
● Strong proficiency in Node.js and JavaScript/TypeScript.
● Expertise in working with relational databases like MySQL/PostgreSQL and AWS RDS.
● Proficient with AWS services including Lambda, S3, EC2, and API Gateway.
● Experience with RESTful API design and GraphQL (optional).
● Knowledge of containerization using Docker is a plus.
● Strong problem-solving and debugging skills.
● Familiarity with tools like Git, Jenkins, and Jira.

Skills: Amazon Web Services (AWS), AWS RDS, Amazon S3, Amazon EC2, AWS Lambda, NodeJS (Node.js), Fullstack Developer, Jenkins, Python and SQL
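A small illustrative sketch of two of the AWS services named above (S3 and Lambda). The role's primary backend language is Node.js, but Python appears in the skills list, so this assumption-based example uses boto3; the bucket, key and function names are hypothetical.

```python
# Hypothetical example: presigned S3 download link and a synchronous Lambda call.
import json
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Generate a time-limited download link for an object stored in S3
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "app-uploads-prod", "Key": "invoices/2024/inv-1001.pdf"},
    ExpiresIn=900,  # link valid for 15 minutes
)
print("Presigned URL:", url)

# Invoke a backend Lambda function and read its JSON response
response = lambda_client.invoke(
    FunctionName="generate-invoice-report",
    InvocationType="RequestResponse",
    Payload=json.dumps({"customer_id": 42}).encode("utf-8"),
)
result = json.loads(response["Payload"].read())
print("Lambda result:", result)
```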

Machine Learning Engineer Bengaluru, Karnataka, India 3 years Not disclosed On-site Full Time

Machine Learning Engineer
Bangalore - 3 days work from office
Max 18.50 LPA

Responsibilities
● Design, develop, and implement machine learning models and algorithms to solve complex business problems.
● Collaborate with data scientists to transition models from research and development to production-ready systems.
● Build and maintain scalable data pipelines for ML model training and inference using Databricks.
● Implement and manage the ML model lifecycle using MLflow for experiment tracking, model versioning, and model registry.
● Deploy and manage ML models in production environments on Azure, leveraging services like Azure Machine Learning, Azure Kubernetes Service (AKS), or Azure Functions.
● Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
● Ensure the reliability, performance, and scalability of ML systems in production.
● Monitor model performance, detect drift, and implement retraining strategies.
● Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
● Document model architecture, data flows, and operational procedures.

Qualifications
● Education: Bachelor's or Master's Degree in Computer Science, Engineering, Statistics, or a related quantitative field.
● Experience: Minimum 3+ years of professional experience as an ML Engineer or in a similar role.

Skills:
● Strong proficiency in Python programming for data manipulation, machine learning, and scripting.
● Hands-on experience with machine learning frameworks such as Scikit-learn, TensorFlow, PyTorch, or Keras.
● Demonstrated experience with MLflow for experiment tracking, model management, and model deployment.
● Proven experience working with Microsoft Azure cloud services, specifically Azure Machine Learning, Azure Databricks, and related compute/storage services.
● Solid experience with Databricks for data processing, ETL, and ML model development.
● Understanding of MLOps principles and practices, including CI/CD for ML, model versioning, monitoring, and retraining.
● Experience with containerization technologies (Docker) and orchestration (Kubernetes, especially AKS) for deploying ML models.
● Familiarity with data warehousing concepts and SQL.
● Ability to work with large datasets and distributed computing frameworks.
● Strong problem-solving skills and attention to detail.
● Excellent communication and collaboration skills.

Nice-to-Have Skills:
● Experience with other cloud platforms (AWS, GCP).
● Knowledge of big data technologies like Apache Spark.
● Experience with Azure DevOps for CI/CD pipelines.
● Familiarity with real-time inference patterns and streaming data.
● Understanding of responsible AI principles (fairness, explainability, privacy).

Certifications:
● Microsoft Certified: Azure AI Engineer Associate
● Databricks Certified Machine Learning Associate (or higher)
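For illustration, a minimal sketch of the MLflow experiment-tracking and model-registry workflow mentioned above. This is an assumption-based example, not part of the listing: the experiment path and registered model name are hypothetical, and it assumes an MLflow tracking server (such as the one built into Databricks) is available.

```python
# Hypothetical example: track a scikit-learn run with MLflow and register the model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/churn-model")  # on Databricks this is a workspace path

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Log the fitted model; registered_model_name adds it to the Model Registry
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="churn-classifier")
```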

Conversational Bot Engineer India 0 years Not disclosed Remote Full Time

Conversational Bot Engineer
Remote; Max 35 LPA
Must have: Dialogflow, NLP, Python or Node

Required Skills:
● Experience in development of virtual agents (chatbots, voicebots) and natural language processing.
● Experience working with one or more AI/NLP platforms - Dialogflow, Alexa, Converse.ai, Amazon Lex, Rasa, LUIS, Kore.AI, Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, etc.
● Knowledge of one or more of the following technologies: Python, JavaScript or Node.js.
● Experience in training chatbots by analyzing historical chat conversations or large amounts of user-generated content and processing the data.
● Practical knowledge of formal syntax, formal semantics, corpus analysis, and dialogue management.
● Strong written communication skills.
● Ability to learn the latest technologies.
● Good problem-solving ability.

Nice to have skills:
● Understanding of conversational UI, voice-based processing (text to speech, speech to text) and voice apps built on Amazon Alexa or Google Home is a plus.
● Experience in Test Driven Development and Agile methodologies.
● Knowledge of creating an end-to-end pipeline for development of AI-based conversational applications.
● Perform text mining, generate and test working hypotheses, prepare and analyze historical data and identify patterns.
● Ability to write regular expressions for data cleaning and preprocessing.
● Understanding of API integrations, single sign-on and token-based authentication.
● Develop unit test cases as per project-specific standards.
● Experience with HTTP, sockets, REST and other web services.
● Perform keyword and topic extraction from chat logs.
● Knowledge of training and tuning topic modelling algorithms like LDA and NMF.
● Understanding of training classical machine learning algorithms along with an understanding of choosing the right evaluation metric.
● Knowledge of frameworks like NLTK and spaCy.
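For illustration, a minimal sketch of the Dialogflow integration named as a must-have, using the google-cloud-dialogflow Python client to send one user utterance to an agent and read back the matched intent. This is an assumption-based example, not part of the listing; the project ID, session ID and sample utterance are hypothetical.

```python
# Hypothetical example: detect the intent of a single utterance against a Dialogflow ES agent.
from google.cloud import dialogflow  # pip install google-cloud-dialogflow


def detect_intent(project_id: str, session_id: str, text: str, language_code: str = "en") -> None:
    client = dialogflow.SessionsClient()  # uses application default credentials
    session = client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = client.detect_intent(request={"session": session, "query_input": query_input})

    result = response.query_result
    print("Matched intent:", result.intent.display_name)
    print("Confidence:", result.intent_detection_confidence)
    print("Bot reply:", result.fulfillment_text)


detect_intent("my-gcp-project", "demo-session-001", "I want to check my order status")
```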

SIP Developer India 0 years Not disclosed On-site Full Time

Responsibilities:
● Design and implement telephony integrations using SIP and SIPREC.
● Develop APIs and backend services to handle call control, call recording, and session management.
● Work with PBX systems, SIP Servers, and Media Servers for SIP call flows and media capture.
● Integrate third-party VoIP systems with internal applications and platforms.
● Analyze and troubleshoot SIP signaling and RTP media flows.
● Collaborate with cross-functional teams including DevOps, Product, and QA to deliver scalable solutions.
● Create technical documentation, diagrams, and support material.
● Ensure systems are secure, resilient, and scalable.

Must-Have Skills:
● Strong experience with the SIP protocol (INVITE, ACK, BYE, REGISTER, REFER, OPTIONS, etc.)
● Practical experience with SIPREC for recording VoIP calls.
● Solid development skills in JavaScript (Node.js).
● Experience working with SIP Servers (e.g., FreeSWITCH, Asterisk, Kamailio, OpenSIPS).
● Hands-on knowledge of WebRTC, RTP streams, and VoIP media handling.
● Experience building and consuming RESTful APIs.
● Familiarity with call flows and SIP trace analysis (using Wireshark, sngrep, or similar).
● Strong understanding of networking basics (UDP, TCP, NAT traversal, STUN/TURN).
● Ability to troubleshoot and debug complex telephony and media issues.

Good to Have Skills:
● Experience with Media Servers (e.g., Janus, Kurento, Mediasoup).
● Knowledge of Call Recording Systems architecture and compliance standards (PCI-DSS, GDPR).
● Experience with Cloud Telephony Platforms (Twilio, Genesys Cloud, Amazon Chime SDK, etc.).
● Familiarity with Session Border Controllers (SBCs).
● Prior experience with SIP trunking and carrier integrations.
● Exposure to Protocol Buffers or gRPC for real-time messaging.
● Understanding of security practices in VoIP (TLS, SRTP, SIP over WebSockets).
● Knowledge of Docker and Kubernetes for deploying SIP services at scale.
● Sound knowledge of telecom protocols such as SIP/ICE/STUN/TURN/SRTP/DTLS/H.323/Diameter/Radius.
● Thoroughly analytical, able to diagnose and fix issues across an SBC portfolio of products.
● Thorough grounding in Linux/RTOS internals and product architecture is preferred.
● Strong knowledge of TCP/UDP/IP and networking concepts is a must.
● Knowledge of IP telephony, SIP, and call routing techniques (ARS, AAR) in a trunk configuration environment.
● Prior experience working with FreeSWITCH, Kamailio, RTP Proxy, etc.
● Strong understanding of audio streaming/WebSockets and their application in real-time communication systems.
● In-depth knowledge of audio codecs and their impact on voice quality and bandwidth utilization.
● Experience with gRPC and Protobuf for building efficient and scalable communication interfaces.
● Extensive experience in large-scale product development of Enterprise, WebRTC, VoIP and VoLTE based products.

Base Language/Framework:
● Primary Language: JavaScript (Node.js backend)
● Frameworks/Tools: Express.js, Socket.io (for signaling if needed), Wireshark (for debugging), sngrep.

GCP Data Engineer India 4 years Not disclosed Remote Full Time

Role: Senior Data Engineer
Experience Level: 4 to 7 Years
Work location: Remote
Max 20 LPA

Must have skills:
1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud - build pipelines (Python/Java) plus scripting; best practices and challenges
3. Knowledge of batch and streaming data ingestion; build end-to-end data pipelines on GCP
4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud; SQL vs NoSQL; types of NoSQL DB (at least 2 databases)
5. Data warehouse concepts - beginner to intermediate level

Required Skills:
● Bachelor's degree in Computer Science or a similar field, or equivalent work experience.
● 3+ years of experience on Data Warehousing, Data Engineering or Data Integration projects.
● Expert with data warehousing concepts, strategies, and tools.
● Strong SQL background.
● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
● Strong experience in GCP and Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions and GCS.
● Good to have: knowledge of SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS).
● Knowledge of AWS and Azure Cloud is a plus.
● Experience in Informatica PowerExchange for Mainframe, Salesforce, and other new-age data sources.
● Experience in integration using APIs, XML, JSON, etc.
● In-depth understanding of database management systems, online analytical processing (OLAP), ETL (extract, transform, load) frameworks, data warehousing and data lakes.
● Good understanding of SDLC, Agile and Scrum processes.
● Strong problem-solving, multi-tasking, and organizational skills.
● Highly proficient in working with large volumes of business data, with a strong understanding of database design and implementation.
● Good written and verbal communication skills.
● Demonstrated experience of leading a team spread across multiple locations.

Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Design and implement Dimensional and Fact tables.
● Identify and implement data transformation/cleansing requirements.
● Develop highly scalable, reliable, and high-performance data processing pipelines to extract, transform and load data from various systems into the Enterprise Data Warehouse.
● Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique.
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary.
● Partner with the BI team to evaluate, design and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information and data from ERP, CRM and HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experience/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers and quality issues; continuously validate reports and dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
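For illustration, a minimal Composer/Airflow DAG sketch of the kind of batch load the "Airflow/Composer" and ETL items above describe: landing daily GCS files into a BigQuery staging table. This is an assumption-based example, not part of the listing; the bucket, dataset and table names are hypothetical, and it uses the Airflow 2.x Google provider.

```python
# Hypothetical example: daily GCS-to-BigQuery load orchestrated by Cloud Composer.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="gcs_to_bq_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["gcp", "edw"],
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="raw-landing-bucket",
        source_objects=["orders/{{ ds }}/*.json"],  # templated by execution date
        source_format="NEWLINE_DELIMITED_JSON",
        destination_project_dataset_table="my-project.edw.stg_orders",
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )
```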

AWS Data Architect India 6 years Not disclosed On-site Full Time

Senior AWS Data Architect
Max 28 LPA

Key Responsibilities:
● Design and implement scalable, secure, and high-performance Big Data architectures using Databricks, Apache Spark, and cloud-native services.
● Lead the end-to-end data architecture lifecycle, from requirements gathering to deployment and optimization.
● Design repeatable and reusable data ingestion pipelines for bringing in data from ERP source systems like SAP, Salesforce, HR, factory and marketing systems, etc. (see the sketch after this listing).
● Collaborate with cross-functional teams to integrate SAP data sources into modern data platforms.
● Drive cloud cost optimization strategies and ensure efficient resource utilization.
● Provide technical leadership and mentorship to a team of data engineers and developers.
● Develop and enforce data governance, data quality, and security standards.
● Translate complex business requirements into technical solutions and data models.
● Stay current with emerging technologies and industry trends in data architecture and analytics.

Required Skills & Qualifications:
● 6+ years of experience in Big Data architecture, Data Engineering and AI-assisted BI solutions within Databricks and AWS technologies.
● 3+ years of experience with AWS data services like S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS and others.
● 3+ years of experience in building Delta Lakes and open formats using technologies like Databricks and AWS Analytics Services.
● Bachelor's degree in computer science, information technology, data science, data analytics or a related field.
● Proven expertise in Databricks, Apache Spark, Delta Lake, and MLflow.
● Strong programming skills in Python, SQL, and PySpark.
● Experience with SAP data extraction and integration (e.g., SAP BW, S/4HANA, BODS).
● Hands-on experience with cloud platforms (Azure, AWS, or GCP), especially in cost optimization and data lakehouse architectures.
● Solid understanding of data modeling, ETL/ELT pipelines, and data warehousing.
● Demonstrated team leadership and project management capabilities.
● Excellent communication, problem-solving and stakeholder management skills.

Preferred Qualifications:
● Experience in the manufacturing domain, with knowledge of production, supply chain, and quality data.
● Certifications in Databricks, cloud platforms, or data architecture.
● Familiarity with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform).
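For illustration, a minimal PySpark sketch of the "repeatable and reusable data ingestion pipelines" responsibility above: one parameterised step that lands a raw S3 extract into a bronze Delta table. This is an assumption-based example, not part of the listing; the paths, table names and helper function are hypothetical.

```python
# Hypothetical example: a reusable bronze-layer ingestion step on Databricks/Delta Lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()


def ingest_to_bronze(source_path: str, source_format: str, target_table: str) -> None:
    """Parameterised so the same step can be reused for SAP, Salesforce, HR extracts, etc."""
    df = (spark.read.format(source_format)
          .option("header", "true")           # only relevant for CSV sources
          .load(source_path)
          .withColumn("_ingested_at", F.current_timestamp())
          .withColumn("_source_file", F.input_file_name()))

    (df.write.format("delta")
       .mode("append")
       .option("mergeSchema", "true")         # tolerate additive schema drift
       .saveAsTable(target_table))


# Example invocation for a Salesforce account extract landed as parquet
ingest_to_bronze("s3://raw-zone/salesforce/accounts/", "parquet", "bronze.sf_accounts")
```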

GCP Data Engineer India 0 years INR 8.0 - 20.0 Lacs P.A. On-site Full Time

Must Have Skills
● GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, Airflow/Composer, Python (preferred)/Java
● ETL on GCP Cloud - build pipelines (Python/Java) plus scripting; best practices and challenges
● Knowledge of batch and streaming data ingestion; build end-to-end data pipelines on GCP
● Knowledge of databases (SQL, NoSQL), on-premise and on-cloud; SQL vs NoSQL; types of NoSQL DB (at least 2 databases)
● Data warehouse concepts - beginner to intermediate level

Role & Responsibilities
● Work with business users and other stakeholders to understand business processes.
● Design and implement Dimensional and Fact tables.
● Identify and implement data transformation/cleansing requirements.
● Develop highly scalable, reliable, and high-performance data processing pipelines to extract, transform and load data from various systems into the Enterprise Data Warehouse.
● Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique.
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary.
● Partner with the BI team to evaluate, design and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information and data from ERP, CRM and HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experience/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers and quality issues; continuously validate reports and dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.

Skills: Google Cloud Platform (GCP), ETL, Python, Big Data, SQL, Data integration, Dataproc, Apache Airflow and BigQuery

Data Engineer - Architect / Lead India 10 years INR 15.0 - 36.0 Lacs P.A. On-site Full Time

Experience Level
10+ years of experience in data engineering, with at least 3–5 years providing architectural guidance, leading teams, and standardizing enterprise data solutions. Must have deep expertise in Databricks, GCP, and modern data architecture patterns.

Key Responsibilities
● Provide architectural guidance and define standards for data engineering implementations.
● Lead and mentor a team of data engineers, fostering best practices in design, development, and operations.
● Own and drive improvements in performance, scalability, and reliability of data pipelines and platforms.
● Standardize data architecture patterns and reusable frameworks across multiple projects.
● Collaborate with cross-functional stakeholders (Product, Analytics, Business) to align data solutions with organizational goals.
● Design data models, schemas, and dataflows for efficient storage, querying, and analytics.
● Establish and enforce strong data governance practices, ensuring security, compliance, and data quality.
● Work closely with governance teams to implement lineage, cataloging, and access control in compliance with standards.
● Design and optimize ETL pipelines using Databricks, PySpark, and SQL.
● Ensure robust CI/CD practices are implemented for data workflows, leveraging Terraform and modern DevOps practices.
● Leverage GCP services such as Cloud Functions, Cloud Run, BigQuery, Pub/Sub, and Dataflow for building scalable solutions.
● Evaluate and adopt emerging technologies, with exposure to Gen AI and advanced analytics capabilities.

Qualifications & Skills
● Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
● Extensive hands-on experience with Databricks (Auto Loader, DLT, Delta Lake, CDF) and PySpark.
● Expertise in SQL and advanced query optimization.
● Proficiency in Python for data engineering and automation tasks.
● Strong expertise with GCP services: Cloud Functions, Cloud Run, BigQuery, Pub/Sub, Dataflow, GCS.
● Deep understanding of CI/CD pipelines, infrastructure-as-code (Terraform), and DevOps practices.
● Proven ability to provide architectural guidance and lead technical teams.
● Experience designing data models, schemas, and governance frameworks.
● Knowledge of Gen AI concepts and ability to evaluate practical applications.
● Excellent communication, leadership, and stakeholder management skills.

Skills: Google Cloud Platform (GCP), Databricks, Architecture, BigQuery, Google Cloud Storage, Generative AI and Dataflow architecture
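For illustration, a minimal sketch of the Databricks Auto Loader pattern named in the qualifications above: incrementally ingesting newly arrived files into a bronze Delta table. This is an assumption-based example, not part of the listing; it assumes a Databricks runtime, and the storage paths and table name are hypothetical.

```python
# Hypothetical example: Auto Loader (cloudFiles) incremental ingestion into Delta.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

bronze_stream = (
    spark.readStream.format("cloudFiles")                   # Databricks Auto Loader
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "gs://lake/_schemas/events")
         .load("gs://lake/landing/events/")
)

(bronze_stream.writeStream
    .option("checkpointLocation", "gs://lake/_checkpoints/bronze_events")
    .trigger(availableNow=True)                              # run as an incremental batch
    .toTable("bronze.events"))
```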

Frontend Developer India 3 years INR 10.0 - 21.0 Lacs P.A. Remote Full Time

Remote opening; minimum 3.5 years of experience.

What You'll Do
You will be working as a senior software engineer within the healthcare domain, where you will focus on module-level integration and collaboration across other areas of projects, helping healthcare organizations achieve their business goals with the use of full-stack technologies, cloud services & DevOps. You will be working with Architects from other specialties such as cloud engineering, data engineering and ML engineering to create platforms, solutions and applications that cater to the latest trends in the healthcare industry such as digital diagnosis, software as a medical product and AI marketplace, among others.

Role & Responsibilities
We are looking for a Full Stack Developer who is motivated to combine the art of design with programming. Responsibilities will include translation of the UI/UX design wireframes to actual code that will produce the visual elements of the application. You will work with the UI/UX designer and bridge the gap between graphical design and technical implementation, taking an active role on both sides and defining how the application looks as well as how it works.
● Develop new user-facing features.
● Build reusable code and libraries for future use.
● Ensure the technical feasibility of UI/UX designs.
● Optimize the application for maximum speed and scalability.
● Assure that all user input is validated before submitting to the back-end.
● Collaborate with other team members and stakeholders.
● Provide stable technical solutions which are robust and scalable as per business needs.

Skills Expectation (Must have) - Frontend:
● Proficient understanding of web markup, including HTML5 and CSS3.
● Basic understanding of server-side CSS pre-processing platforms, such as LESS and SASS.
● Proficient understanding of client-side scripting and JavaScript frameworks, including jQuery.
● Good understanding of at least one of the advanced JavaScript libraries and frameworks such as AngularJS, KnockoutJS, BackboneJS, ReactJS, etc.
● Familiarity with one or more modern front-end frameworks such as Angular 15+, React, VueJS, Backbone.
● Good understanding of asynchronous request handling, partial page updates, and AJAX.
● Proficient understanding of cross-browser compatibility issues and ways to work around them.
● Experience with generic Angular testing frameworks.

Skills: AngularJS (1.x), Angular (2+), JavaScript and HTML/CSS