8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Purpose: We are looking for a highly skilled and experienced data engineering professional to lead our data engineering team. The ideal candidate will possess a strong technical background, strong project management abilities, and excellent client handling and stakeholder management skills. This role requires a strategic thinker who can drive the design, development, and implementation of data solutions that meet our clients' needs while ensuring the highest standards of quality and efficiency.

Job Responsibilities
Technology Leadership – Lead and guide the team, independently or with little support, to design, implement, and deliver complex cloud-based data engineering and data warehousing projects. Manage projects in a fast-paced agile ecosystem and ensure quality deliverables within stringent timelines. Own risk management, maintaining risk documentation and mitigation plans. Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation, and deployments.
Communication & Logical Thinking – Demonstrate strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment. Effectively present and defend team viewpoints while securing buy-in from both technical and client stakeholders.
Client Relationship – Manage client relationships and expectations independently, deliver results back to the client independently, and communicate clearly.

Work Experience
Expertise and 8+ years of working experience in at least two ETL tools among Matillion, DBT, PySpark/Python, Informatica, and Talend.
Expertise and working experience in at least two databases among Databricks, Redshift, Snowflake, SQL Server, and Oracle.
Strong data warehousing, data integration, and data modeling fundamentals such as star schema, snowflake schema, dimension tables, and fact tables.
Strong command of SQL building blocks, including creating complex SQL queries and procedures (a brief illustration follows below).
Experience with AWS or Azure cloud and their service offerings.
Awareness of techniques such as data modeling, performance tuning, and regression testing.
Willingness to learn and take ownership of tasks.
Excellent written/verbal communication and problem-solving skills.
Understanding of and working experience with pharma commercial data sets such as IQVIA, Veeva, Symphony, Liquid Hub, and Cegedim is an advantage.
Good experience working on pharma or life sciences domain projects.

Education: BE/B.Tech, MCA, M.Sc., or M.Tech with 60%+.

Why Axtria: Axtria is a global provider of cloud software and data analytics to the Life Sciences industry. We help Life Sciences companies transform the product commercialization journey to drive sales growth and improve healthcare outcomes for patients. We are acutely aware that our work impacts millions of patients, and we work passionately to improve their lives.
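As a brief, illustrative sketch of the star-schema SQL skills listed above: a fact table joined to its dimensions and aggregated, run here through PySpark's SQL interface. The fact_sales, dim_product, and dim_date tables and their columns are hypothetical, not taken from the posting.

```python
# Minimal star-schema query sketch; table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_example").getOrCreate()

# Revenue by month and product category: fact table joined to two dimensions.
monthly_revenue = spark.sql("""
    SELECT d.year_month,
           p.product_category,
           SUM(f.net_sales_amount) AS total_revenue
    FROM   fact_sales  f
    JOIN   dim_product p ON f.product_key = p.product_key
    JOIN   dim_date    d ON f.date_key    = d.date_key
    GROUP BY d.year_month, p.product_category
    ORDER BY d.year_month
""")

monthly_revenue.show(truncate=False)
```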
We will provide (Employee Value Proposition):
An inclusive environment that encourages diverse perspectives and ideas.
Challenging and unique opportunities to contribute to the success of a transforming organization.
Opportunity to work on technical challenges with impact across geographies.
Vast opportunities for self-development: the online Axtria Institute, global knowledge-sharing opportunities, and learning through external certifications.
Sponsored Tech Talks & Hackathons.
Possibility to relocate to any Axtria office for short- and long-term projects.
Benefits package: health benefits, retirement benefits, paid time off, flexible benefits, and hybrid/full-time office options.
Axtria is an equal-opportunity employer that values diversity and inclusiveness in the workplace.
A few more links are given below if you would like to know more about Axtria's journey as an organization, its culture, and its products and solutions offerings.
White papers – Research Hub: https://www.axtria.com/axtria-research-hub-pharmaceutical-industry/
Axtria product and capability content – 5-step guides: https://www.axtria.com/axtria-5-step-guides-sales-marketing-data-management-best-practices/
Recent marketing videos, including Jassi's public discussions – Video Wall: https://www.axtria.com/video-wall/
Infographics and points of view on the industry, therapy areas, etc.: https://www.axtria.com/video-wall/
Posted 15 hours ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Dear Aspirants, Greetings from ValueLabs! We are hiring for a Power BI Lead.
Role: Power BI Lead
Skill Set: Power BI, SQL, ADF
Experience: 8+ years
Notice Period: Immediate
Location: Hyderabad
Experience Required: Minimum 8 years of relevant experience in data engineering, specifically with Power BI and Azure data engineering.
Key Responsibilities:
1. Data Engineering:
a. Develop and maintain ETL processes using Azure Data Factory.
b. Design and implement data warehouses using Azure Synapse Analytics.
c. Optimize data storage and retrieval strategies to ensure efficient use of resources and fast query performance.
d. Implement data quality checks and validation processes to ensure accuracy and reliability of data (a minimal sketch follows below).
2. Power BI Reporting:
a. Create compelling and interactive reports and dashboards using Power BI Desktop.
b. Design and implement Power BI data models that integrate efficiently with various data sources.
c. Automate report delivery and scheduling using Power Automate or similar tools.
d. Collaborate with business stakeholders to understand reporting needs and translate them into actionable insights.
3. Technical Leadership:
a. Act as a technical authority within the team, providing guidance on data engineering principles, the Azure platform, and Power BI tools.
b. Design, architect, and implement scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, and other relevant technologies.
c. Ensure adherence to data governance standards and regulations, such as GDPR, HIPAA, etc.
d. Implement robust monitoring and alerting mechanisms to detect and resolve issues proactively.
4. Team Management:
a. Oversee and manage a team of data engineers, ensuring they meet project deadlines and deliver high-quality work.
b. Develop and implement team guidelines, policies, and procedures to enhance productivity and performance.
c. Mentor and coach team members to improve their skills and support their career development.
d. Conduct regular one-on-one meetings to discuss progress, address concerns, and set goals.
Preferred Skills:
• Experience with DevOps practices, including CI/CD pipelines and automation tools like Azure DevOps.
Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Proficiency in Azure data engineering services, including Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
• Expertise in designing and implementing Power BI reports and dashboards.
• Strong understanding of data architecture, data modelling, and data governance principles.
• Experience working with large datasets and complex data transformation processes.
• Excellent communication and interpersonal skills, with the ability to collaborate effectively across departments.
• Ability to manage multiple priorities and work under tight deadlines.
• Professional certification as an Azure Data Engineer or equivalent is preferred but not required.
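The data-quality responsibility above (item 1.d) could look something like this minimal PySpark sketch; the stg_orders table, its columns, and the rule names are hypothetical placeholders, not from the posting.

```python
# Hedged sketch of simple data-quality validation rules on a staging table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.table("stg_orders")  # assumed staging table

checks = {
    "order_id_not_null": df.filter(F.col("order_id").isNull()).count() == 0,
    "order_id_unique": df.groupBy("order_id").count().filter("count > 1").count() == 0,
    "amount_non_negative": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Failing fast keeps bad data out of downstream Power BI models.
    raise ValueError(f"Data-quality checks failed: {failed}")
print("All data-quality checks passed")
```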
Posted 15 hours ago
5.0 years
12 - 25 Lacs
Pune, Maharashtra, India
On-site
Job Title: Sr. Software Engineer – Products
Location: Pune

About Improzo
At Improzo (Improve + Zoe, meaning Life in Greek), we believe in improving life by empowering our customers. Founded by seasoned industry leaders, we are laser-focused on delivering quality-led commercial analytical solutions to our clients. Our dedicated team of experts in commercial data, technology, and operations has been evolving and learning together since our inception. Here, you won't find yourself confined to a cubicle; instead, you'll be navigating open waters, collaborating with brilliant minds to shape the future. You will work with leading Life Sciences clients, seasoned leaders, and carefully chosen peers like you! People are at the heart of our success, so we have defined our CARE values framework with a lot of effort, and we use it as our guiding light in everything we do. We CARE!
Customer-Centric: Client success is our success. Prioritize customer needs and outcomes in every action.
Adaptive: Agile and innovative, with a growth mindset. Pursue bold and disruptive avenues that push the boundaries of possibility.
Respect: Deep respect for our clients and colleagues. Foster a culture of collaboration and act with honesty, transparency, and ethical responsibility.
Execution: Laser-focused on quality-led execution; we deliver! Strive for the highest quality in our services, solutions, and customer experiences.

About The Role
We are seeking a highly skilled and motivated full-stack Sr. Python Product Engineer to join our team and play a pivotal role in the development of our next-generation Analytics Platform for the Life Sciences industry. This platform, featuring a suite of innovative AI apps, helps users solve critical problems across the life sciences value chain, from product launch and brand management to salesforce optimization. As a Senior Engineer, you will be a key contributor, responsible for designing, building, and deploying the core components of the platform. You will blend your deep expertise in full-stack Python development, data engineering, and AI/ML to create a scalable and impactful product that delivers actionable insights.

Key Responsibilities
Design and deliver a modern, AI-first analytical applications platform using Python, leveraging frameworks like Django or Flask.
Design, develop, test, deploy, and maintain robust, scalable, and efficient software applications using Python.
Develop and implement server-side logic, integrating user-facing elements developed by front-end developers.
Design and implement data storage solutions, working with various SQL and NoSQL databases.
Develop and integrate APIs (RESTful, GraphQL) and other third-party services.
Optimize applications for maximum speed, scalability, and security.
Participate in the entire software development life cycle (SDLC), from requirements gathering and analysis to deployment and post-launch support.
Conduct code reviews, provide constructive feedback, and mentor junior developers.
Troubleshoot, debug, and resolve complex software defects and issues.
Build scalable data pipelines and services, integrating technologies like Spark, Kafka, and Databricks/Snowflake, to handle large-scale life sciences datasets from sources like IQVIA and Veeva.
Implement and manage CI/CD pipelines using tools like Jenkins or GitLab CI and containerization with Docker and Kubernetes to ensure high-quality and reliable deployments.
Collaborate closely with product managers and architects to translate product vision into technical requirements and deliver high-quality, client-centric features.
Integrate and operationalize advanced AI/ML models, including generative AI and agents built with CrewAI and LangChain, into the platform to power new applications.
Ensure the platform provides robust capabilities for data exploration, analysis, visualization, and reporting, meeting the needs of our users.
Uphold engineering best practices, conduct thorough code reviews, and champion a culture of technical excellence and continuous improvement.

Qualifications
Bachelor's or Master's degree in Computer Science or a related technical field.
5+ years of hands-on experience in full-stack Python product development, building and scaling complex applications in a product-focused environment. Past experience leveraging Java and .NET is desired.
Expert proficiency in Python for backend development, with extensive experience in Django, including the ORM, migrations, and the Django REST Framework (DRF); a minimal DRF sketch follows this posting.
In-depth knowledge of core Python principles, including object-oriented programming (OOP), data structures, and algorithms.
Experience with the big-data ecosystem for data processing, analysis, and backend development, e.g., Flask/Django, SQL/NoSQL, Spark, Airbyte/Databricks/Snowflake, Kafka, Hadoop, etc.
Strong experience with big-data technologies such as Spark, Airbyte, Databricks, and Snowflake, plus relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Solid experience with front-end technologies like React or Angular.
Hands-on experience with cloud-based platforms (AWS preferred), including services for compute, storage, and databases.
Proven experience with CI/CD tools (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and logging/monitoring tools (Grafana, Prometheus).
Experience with advanced analytics, including integrating AI/ML models into production applications.
Experience with testing frameworks (e.g., Pytest, Unittest) and a commitment to writing unit and integration tests.
Knowledge of the life sciences and biopharma industry, including commercial datasets and compliance requirements (HIPAA, CCPA), is highly desirable.
Excellent problem-solving, communication, and collaboration skills.
Attention to detail, with a bias for quality and client centricity.
Ability to work independently and as part of a cross-functional team.
Strong leadership, mentoring, and coaching skills.

Benefits
Competitive salary and benefits package.
Opportunity to work on cutting-edge analytics projects transforming the life sciences industry.
Collaborative and supportive work environment.
Opportunities for professional development and growth.

Skills: sql, restful apis, python, databricks, spark, data engineering, front-end technologies (react, angular), django, product development, kafka, docker, ci/cd (jenkins, gitlab ci), flask, kubernetes, nosql, ai/ml integration, snowflake, aws, graphql
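As a hedged illustration of the Django REST Framework experience called out above, a minimal model/serializer/viewset sketch could look like the following. It assumes an existing Django project with an installed app named "analytics"; the Metric model and its fields are hypothetical, not something defined by the posting.

```python
# Hedged sketch only: meant to live inside an existing Django app.
from django.db import models
from rest_framework import routers, serializers, viewsets


class Metric(models.Model):
    brand = models.CharField(max_length=100)
    name = models.CharField(max_length=100)
    value = models.FloatField()
    captured_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        app_label = "analytics"  # assumed app name


class MetricSerializer(serializers.ModelSerializer):
    class Meta:
        model = Metric
        fields = ["id", "brand", "name", "value", "captured_at"]


class MetricViewSet(viewsets.ModelViewSet):
    # Standard DRF viewset exposing list/create/retrieve/update/delete.
    queryset = Metric.objects.all().order_by("-captured_at")
    serializer_class = MetricSerializer


router = routers.DefaultRouter()
router.register(r"metrics", MetricViewSet)
# In urls.py: urlpatterns = [path("api/", include(router.urls))]
```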
Posted 16 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hi,
Role: Azure Data Engineer
Location: Chennai, Gurugram (onsite 3 days a week)
Shift Timing: 2 PM to 11 PM
Experience: 3+ years
Notice Period: Immediate or 15 days (please do not apply if your notice period is more than 30 days)
Required Skills and Qualifications:
Educational Background: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Certifications in Databricks, Azure, or related technologies are a plus.
Technical Skills:
o Proficiency in SQL for complex queries, database design, and optimization.
o Strong experience with PySpark for data transformation and processing (a brief sketch follows below).
o Hands-on experience with Databricks for building and managing big data solutions.
o Familiarity with cloud platforms like Azure.
o Knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift).
o Experience with data versioning and orchestration tools like Git, Airflow, or Dagster.
o Solid understanding of Big Data ecosystems (Hadoop, Hive, etc.).
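As a rough illustration of the PySpark transformation work described above, here is a hedged sketch of a bronze-to-silver style job on Databricks; the input path, column names, and target table are placeholders, not from the posting.

```python
# Illustrative PySpark cleanup-and-write job; names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

raw = spark.read.json("/mnt/raw/events/")  # assumed landing path

cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_type").isNotNull())
)

# Write as a partitioned Delta table (Delta Lake is built into Databricks).
(cleaned.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("silver.events"))
```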
Posted 16 hours ago
5.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience Required: 5 years
Educational Qualification: 15 years of full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in driving the success of application initiatives and fostering a collaborative environment.
Roles & Responsibilities:
- Act as a subject-matter expert (SME).
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to meet deadlines.
Professional & Technical Skills:
- Proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with data governance and compliance standards.
- Ability to analyze and interpret complex data sets.
Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
Posted 16 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
We are seeking a Platform Architect with expertise in Informatica PowerCenter and Informatica Intelligent Cloud Services (IICS) to design, implement, and optimize enterprise-level data integration platforms. The ideal candidate will have a strong background in ETL/ELT architecture, cloud data integration, and platform modernization, ensuring scalability, security, and performance across on-prem and cloud environments.
Responsibilities
Platform Engineering & Administration
Oversee installation, configuration, and optimization of PowerCenter and IICS environments.
Manage platform scalability, performance tuning, and troubleshooting.
Implement data governance, security, and compliance (e.g., role-based access, encryption, data lineage tracking).
Optimize connectivity and integrations with various sources (databases, APIs, cloud storage, SaaS apps).
Cloud & Modernization Initiatives
Architect and implement IICS-based data pipelines for real-time and batch processing.
Migrate existing PowerCenter workflows to IICS, leveraging serverless and cloud-native features.
Ensure seamless integration with cloud platforms (AWS, Azure, GCP) and modern data lakes/warehouses (Snowflake, Redshift, BigQuery).
Qualifications
4 years of experience in data integration and ETL/ELT architecture.
Expert-level knowledge of Informatica PowerCenter and IICS (Cloud Data Integration, API & Application Integration, Data Quality).
Hands-on experience with cloud platforms (AWS, Azure, GCP) and modern data platforms (Snowflake, Databricks, Redshift, BigQuery).
Strong SQL, database tuning, and performance optimization skills.
Deep understanding of data governance, security, and compliance best practices.
Experience with automation, DevOps (CI/CD), and Infrastructure-as-Code (IaC) tools for data platforms.
Excellent communication, leadership, and stakeholder management skills.
Preferred Qualifications
Informatica certifications (IICS, PowerCenter, Data Governance).
Proficiency in PowerCenter-to-IDMC conversions.
Understanding of real-time streaming (Kafka, Spark Streaming).
Knowledge of API-based integration and event-driven architectures.
Familiarity with machine learning and AI-driven data processing.
Posted 16 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Role: QA Automation (QA)
Location: India/Remote
Need: Immediate joiner
Job Overview
Responsibilities
Design and implement automated testing frameworks for data pipelines and infrastructure.
Develop test scripts for Databricks, Snowflake, and Airflow-based systems (a minimal pytest-style sketch follows below).
Execute and monitor automated tests to ensure system reliability.
Collaborate with developers to identify and resolve defects.
Maintain and update test cases based on project requirements.
Document test results and quality metrics.
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
3+ years of experience in QA automation or software testing.
Proficiency in automation tools (e.g., Selenium, TestNG, or similar).
Experience with testing data pipelines or cloud-based systems.
Strong scripting skills in Python or Java.
Must be located in India and eligible to work.
Preferred Skills
Experience in telecommunications or RAN data systems.
Familiarity with Databricks, Snowflake, or Airflow.
Knowledge of CI/CD pipelines and Git.
Certifications in QA or automation testing (e.g., ISTQB).
Please share your resume at Akhila.kadudhuri@programmers.io with current CTC, expected CTC, and notice period.
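The test-script responsibility above might look like the following minimal pytest sketch. The load_daily_revenue helper is a hypothetical stand-in that stubs what would normally be a query against Databricks or Snowflake.

```python
# Hedged pytest sketch for validating a pipeline output table.
import pandas as pd


def load_daily_revenue() -> pd.DataFrame:
    # In a real suite this would query Databricks/Snowflake; stubbed here.
    return pd.DataFrame(
        {"report_date": ["2024-01-01", "2024-01-02"], "revenue": [1200.0, 980.5]}
    )


def test_no_duplicate_dates():
    df = load_daily_revenue()
    assert not df["report_date"].duplicated().any()


def test_revenue_is_non_negative():
    df = load_daily_revenue()
    assert (df["revenue"] >= 0).all()


def test_expected_columns_present():
    df = load_daily_revenue()
    assert {"report_date", "revenue"} <= set(df.columns)
```

Running `pytest` against such a module is the kind of automated check a CI/CD pipeline would execute on every deployment.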
Posted 17 hours ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role
We're looking for a Senior Engineering Manager to lead our Data/AI Platform and MLOps teams at slice. In this role, you'll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps that can be leveraged by various functions like legal, CX, and product in a secure manner. This is a hands-on leadership role, perfect for someone who enjoys solving deep technical problems while growing people and teams.

What You Will Do
Lead and grow the data platform pod focused on all aspects of data (batch and real-time processing, ML platform, AI tooling, business reporting, and data products that enable product experiences through data).
Maintain hands-on technical leadership: lead by example through code reviews, architecture decisions, and direct technical contribution.
Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions.
Own the technical roadmap for our data platform, including infrastructure modernization, performance, scalability, and cost efficiency.
Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores.
Build and scale ML infrastructure with MLOps best practices, including automated pipelines, model monitoring, and real-time inference systems.
Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization.
Implement enterprise AI governance, including model security, access controls, and compliance frameworks for internal AI applications.
Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture.
Establish and enforce best practices around data governance, access controls, and data quality.
Ensure regulatory compliance with GDPR, PCI-DSS, and SOX through automated compliance monitoring and secure data pipelines.
Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails.
Coach engineers and team leads through regular 1:1s, feedback, and performance conversations.

What You Will Need
10+ years of engineering experience, including 2+ years managing data or infrastructure teams, with proven hands-on technical leadership.
Strong stakeholder management skills, with experience translating business requirements into data solutions and identifying product enhancement opportunities.
Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems.
Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus a modern stack: dbt, Databricks, Snowflake, Terraform.
Hands-on experience building AI/ML platforms, including MLOps tools, and experience with LLM hosting, model serving, and secure AI application development.
Proven experience improving performance, cost, and observability in large-scale data systems.
Expert-level cloud platform knowledge with container orchestration (Kubernetes, Docker) and Infrastructure-as-Code.
Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis).
Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting platforms, and secure AI application development patterns.
Comfort working in fast-paced, product-led environments with the ability to balance innovation and regulatory constraints.
Bonus: experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations.

Life at slice
Life so good, you'd think we're kidding:
Competitive salaries. Period.
Extensive medical insurance that looks out for our employees and their dependents. We'll love you and take care of you, our promise.
Flexible working hours. Just don't call us at 3 AM, we like our sleep schedule.
Tailored vacation and leave policies so that you enjoy every important moment in your life.
A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
Learning and upskilling opportunities. Seriously, not kidding.
Good food, games, and a cool office to make you feel at home.
An environment so good, you'll forget the term "colleagues can't be your friends".
Posted 18 hours ago
10.0 years
0 Lacs
India
Remote
Outbound Sales Executive
Location: India, Full Time, Remote
*** DO NOT APPLY if you do not have experience selling Analytics Professional Services to Enterprise B2B clients. ***
What You'll Bring
Proven Sales Expertise: At least 10+ years of experience in sales development, cold calling, and outbound messaging.
Professional Services Background: Deep experience working with GCP/Snowflake and/or Databricks, and a strong history of selling professional services for a non-product company.
Tech-Savvy: Proficiency with sales tools like CRMs, prospecting tools (e.g., ZoomInfo), and communication platforms (e.g., Slack, LinkedIn).
Influential Communicator: Exceptional written and verbal communication skills, with the ability to build rapport and influence decisions, including with C-level executives.
Results-Oriented: A demonstrated ability to track and measure activity, outcomes, and goals, with a keen eye for detail.
Strategic & Agile: Excellent analytical skills, with the ability to navigate ambiguity, prioritize effectively, and collaborate across organizations and with external stakeholders.
Customer-Centric: A strong focus on customer satisfaction and a willingness to address escalated client issues with speed and urgency.
Education: A Bachelor's degree in a related field or equivalent work experience. An MBA is highly desirable.
Travel: Willingness and ability to travel regionally (expected at least 50%).
What You'll Do
As an Outbound Sales Account Executive specializing in our GCP/Snowflake & Databricks practice, you'll be a key driver of our growth. Your mission is to build and nurture relationships with new prospects, creating a robust pipeline of opportunities for our team. You'll be the first point of contact for leaders at target organizations, introducing them to our innovative Analytics Professional Services and Solutions.
Strategize & Prospect: Creatively identify and engage with leaders at target organizations through cold calls, LinkedIn, email, and other outbound channels.
Build Relationships: Nurture relationships with prospects, providing relevant insights into their technology footprint and strategic goals. You'll transition these relationships into qualified opportunities.
Collaborate & Support: Work closely with our Enterprise Account Executives and internal GTM teams to refine outreach strategies, ensure smooth handoffs, and provide ongoing support throughout the sales process.
Drive Account Growth: Actively lead account strategy to generate and develop business growth opportunities with new customers, collaborating with our alliance partners.
Measure & Track: Maintain detailed, up-to-date records of all activities, outcomes, and goals in our CRM, ensuring a clear view of your progress.
Stay Ahead of the Curve: Continuously expand your knowledge of cloud and industry trends to offer relevant and insightful information that resonates with potential customers.
Posted 19 hours ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Title: Product Owner / Subject Matter Expert (AI & Data)
Experience Required: 10+ years
Location: The selected candidate is required to work onsite for the initial 1 to 3-month project training and execution period at either our Kovilpatti or Chennai location, which will be confirmed during the onboarding process. After the initial period, remote work opportunities will be offered.
Job Description:
The Product Owner / Subject Matter Expert (AI & Data) will lead the definition, prioritization, and successful delivery of intelligent, data-driven products by aligning business needs with AI/ML and data platform capabilities. Acting as a bridge between stakeholders, data engineering teams, and AI developers, this role ensures that business goals are translated into actionable technical requirements. The candidate will manage product backlogs, define epics and features, and guide cross-functional teams throughout the product development lifecycle. They will play a crucial role in driving innovation, ensuring data governance, and realizing value through AI-enhanced digital solutions.
Key Responsibilities:
Define and manage the product roadmap across AI and data domains based on business strategy and stakeholder input.
Translate business needs into technical requirements, user stories, and use cases for AI and data-driven applications.
Collaborate with data scientists, AI engineers, and data engineers to prioritize features, define MVPs, and validate solution feasibility.
Lead backlog refinement, sprint planning, and iteration reviews across multidisciplinary teams.
Drive the adoption of AI models (e.g., LLMs, classification, prediction, recommendation) and data pipelines that support operational goals.
Ensure inclusion of data governance, lineage, and compliance requirements in product development.
Engage with business units to define KPIs and success metrics for AI and analytics products.
Document product artifacts such as PRDs, feature definitions, data mappings, model selection criteria, and risk registers.
Facilitate workshops, stakeholder demos, and solution walkthroughs to ensure ongoing alignment.
Support responsible AI practices and secure data-sharing standards.
Technical Skills:
Product Management Tools: Azure DevOps, Jira, Confluence
AI/ML Concepts: LLMs, NLP, predictive analytics, computer vision, generative AI
AI Tools: OpenAI, Azure OpenAI, MLflow, LangChain, prompt engineering
Data Platforms: Azure Data Factory, Databricks, Synapse Analytics, Purview, SQL, NoSQL
Data Governance: Metadata management, data lineage, PII handling, classification standards
Documentation: PRDs, data dictionaries, process flows, KPI dashboards
Methodologies: Agile/Scrum, backlog management, MVP delivery
Qualification:
Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
Preferred Certifications: Microsoft Certified (Azure AI Engineer Associate / Azure Data Fundamentals / Azure Data Engineer Associate).
10+ years of experience in product ownership, business analysis, or solution delivery in AI and data-centric environments.
Proven success in delivering AI-enabled products and scalable data platforms.
Strong communication, stakeholder facilitation, and technical documentation skills.
Posted 19 hours ago
5.0 years
0 Lacs
India
On-site
Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 183 million registered learners as of June 30, 2025. Coursera partners with over 350 leading university and industry partners to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera's platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp.

Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns.

At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration.

Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you.

Job Overview:
We are seeking a highly skilled and collaborative Data Scientist to join our team. Reporting to the Director of Data Science, you will work alongside and provide technical guidance to a subset of our Analytics and Insights group and provide support to a few business lines, including Industry and University Partnerships and Content/Credentials, supporting Product, Marketing, Content, Finance, Services, and more. As a Data Scientist, you will influence strategies and roadmaps for business units within your purview through actionable insights. Your responsibilities will include forecasting content performance, informing content acquisition and prescribing improvements, addressing A/B testing setups and reporting, answering ad-hoc business questions, defining metrics and goals, building and managing dashboards, causal inference and ML modeling, supporting business event tracking and unification, and more. The ideal candidate will be a creative and collaborative Data Scientist who can proactively drive results in their areas of focus and provide guidance and best practices around statistical modeling and experimentation, data analysis, and data quality.

Responsibilities:
As a Data Scientist, you will assume responsibility for guiding the planning, assessment, and execution of our content initiatives and the subsequent engagement and learner and student success. You will play a key role in identifying gaps in content and proposing ways to acquire new, targeted content or enhance existing content, in addition to creating and leveraging forecasts of content pre- and post-launch, defining KPIs, and creating reports for measuring the impact of diverse tests, product improvements, and content releases.
In this position, you will guide other Data Scientists and provide technical feedback throughout various stages of project development, including optimizing analytical models, reviewing code, creating dashboards, and conducting experimentation. You will examine and prioritize projects and provide leadership with insightful feedback on results. Your role will require that you analyze the connection between our consumer-focused business and our learners' pathway to a degree, and optimize systems to effectively lead to the optimal outcome for our learners. You will collaborate closely with the data engineering and operations teams to ensure that we have the right content at the right time for our learners. You will also analyze prospect behavior to identify content acquisition and optimization opportunities that promote global growth in the Consumer and Degrees business, and assist content, marketing, and product managers in developing and measuring growth initiatives, resulting in more lives changed through learning.

In this role, you'll be directly involved in the planning, measurement, and evaluation of content, engagement, and customer success initiatives and experiments.
Proactively identify gaps in content availability and recommend targeted content acquisition or improvement of existing content.
Define and develop KPIs and create reports to measure the impact of various tests, content releases, product improvements, etc.
Mentor Data Scientists and offer technical guidance on projects: dashboard creation, troubleshooting code, statistical modeling, experimentation, and general analysis optimization.
Run exploratory analyses, uncover new areas of opportunity, create and test hypotheses, develop dashboards, and assess the potential upside of a given opportunity.
Work closely with our data engineering and operations teams to ensure that content funnel tracking supports product needs, and maintain self-serve reporting tools.
Advise content, marketing, and product managers on experimentation and measurement plans for key growth initiatives.

Basic Qualifications:
Background in applied math, computer science, statistics, or a related technical field.
5+ years of experience using data to advise product or business teams.
2+ years of experience applying statistical inference techniques to business questions.
Excellent business intuition, cross-functional communication, and project management.
Strong applied statistics and data visualization skills.
Proficiency with at least one scripting language (e.g., Python), one statistical software package (e.g., R, NumPy/SciPy/Pandas), and SQL.

Preferred Qualifications:
Experience at an EdTech or content subscription business.
Experience partnering with SaaS sales and/or marketing organizations.
Experience working with Salesforce and/or Marketo data.
Experience with Airflow, Databricks, and/or Looker.
Experience with Amplitude.

If this opportunity interests you, you might like these courses on Coursera:
Go Beyond the Numbers: Translate Data into Insights
Applied AI with DeepLearning
Probability & Statistics for Machine Learning & Data Science

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class.
If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
Posted 20 hours ago
7.0 years
0 Lacs
Gurgaon Rural, Haryana, India
On-site
Minimum of 7+ years of experience in the data analytics field.
Proven experience with Azure/AWS Databricks in building and optimizing data pipelines, architectures, and datasets.
Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
Ability to troubleshoot and optimize complex queries on the Spark platform.
Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
Hands-on experience in performance tuning and optimizing code running in Databricks environments (a brief illustration follows below).
Strong analytical and problem-solving skills, particularly within Big Data environments.
Experience with Big Data management tools and technologies, including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, and Azure.
Technical and Professional Skills (Must Have):
Excellent communication skills with the ability to interact directly with customers.
Azure/AWS Databricks.
Python / Scala / Spark / PySpark.
Strong SQL and RDBMS expertise.
HIVE / HBase / Impala / Parquet.
Sqoop, Kafka, Flume.
Airflow.
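As a small, hedged illustration of the Spark/Databricks performance tuning mentioned above: broadcasting a small dimension table avoids a shuffle on the large side of a join, which is one of the most common optimizations. The table and column names here are assumptions, not from the posting.

```python
# Hedged tuning sketch: broadcast join plus partitioned write.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

transactions = spark.table("silver.transactions")  # large fact table (assumed)
stores = spark.table("silver.dim_store")           # small lookup table (assumed)

# Broadcast the small side so the join happens map-side, avoiding a full shuffle.
enriched = transactions.join(broadcast(stores), on="store_id", how="left")

# Partition the output by the usual filter column to keep downstream scans cheap.
(enriched.write
         .format("delta")
         .mode("overwrite")
         .partitionBy("txn_date")
         .saveAsTable("gold.transactions_enriched"))
```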
Posted 20 hours ago
0 years
0 Lacs
India
Remote
Role: Support Specialist L3
Location: India
About the Operations Team: Covers the activities, processes, and practices involved in managing and maintaining the operational aspects of an organization's IT infrastructure and systems. It focuses on ensuring the smooth and reliable operation of IT services, infrastructure components, and supporting systems in the Data & Analytics area.
Duties Description:
Provide expert service support as the L3 specialist for the service.
Identify, analyze, and develop solutions for complex incidents or problems raised by stakeholders and clients as needed.
Analyze issues and develop tools and/or solutions that will help enable business continuity and mitigate business impact.
Proactively and promptly update assigned tasks, and provide responses and solutions within the team's agreed timelines.
Propose corrective action plans for problems.
Deploy bug fixes in managed applications.
Gather requirements, then analyze, design, and implement complex visualization solutions.
Participate in internal knowledge sharing, collaboration activities, and service improvement initiatives. Tasks may include monitoring, incident/problem resolution, documentation, automation, assessment, and implementation/deployment of change requests.
Provide technical feedback and mentoring to teammates.
Requirements:
Willing to work either ASIA, EMEA, or NALA shifts.
Strong problem-solving, analytical, and critical thinking skills.
Strong communication skill set; ability to translate technical details to business/non-technical stakeholders.
Extensive experience with SQL, T-SQL, and PL/SQL, including but not limited to ETL, merge, partition exchange, exception and error handling, and performance tuning.
Experience with Python/PySpark, mainly with Pandas, NumPy, Pathlib, and PySpark SQL functions.
Experience with Azure fundamentals, particularly Azure Blob Storage (file systems and AzCopy).
Experience with Azure data services: Databricks and Data Factory.
Understands the operation of ETL processes, triggers, and schedulers; logging, dbutils, PySpark SQL functions, and handling different file formats (e.g., JSON).
Experience with Git repository maintenance and DevOps concepts. Familiarity with building, testing, and deployment processes.
Nice to have:
Experience with Control-M (if no experience, required to learn on the job).
KNIME.
Power BI.
Willingness to be cross-trained in all of the technologies involved in the solution.
We offer:
Stable employment. On the market since 2008, with 1300+ talents currently on board in 7 global sites.
"Office as an option" model. You can choose to work remotely or in the office.
Flexibility regarding working hours and your preferred form of contract.
Comprehensive online onboarding program with a "Buddy" from day 1.
Cooperation with top-tier engineers and experts.
Unlimited access to the Udemy learning platform from day 1.
Certificate training programs. Lingarians earn 500+ technology certificates yearly.
Upskilling support: capability development programs, Competency Centers, knowledge-sharing sessions, community webinars, and 110+ training opportunities yearly.
Grow as we grow as a company. 76% of our managers are internal promotions.
A diverse, inclusive, and values-driven community.
Autonomy to choose the way you work. We trust your ideas.
Create our community together. Refer your friends to receive bonuses.
Activities to support your well-being and health.
Plenty of opportunities to donate to charities and support the environment.
If you are interested in this position, please apply via the link below. Application Link
Posted 21 hours ago
8.0 years
0 Lacs
India
Remote
Job Description
As an AWS Solution Architect, your responsibilities are to:
Design and implement scalable, secure, and cost-efficient solutions on AWS for diverse data and AI use cases.
Document solution architectures and technical approaches for stakeholders at various levels.
Present and explain technical solutions to business users, focusing on business value, cost, and efficiency.
Participate in architecture forums to present, defend, and refine technical designs with peers.
Act as a solution advisor, guiding clients beyond stated requirements to more effective alternatives.
Contribute to high-level project documentation such as Statements of Work (SoWs) and development roadmaps.
Profile Requirements
For this position of AWS Solution Architect, we are looking for someone with:
8+ years of overall IT experience, with at least 5 years of solid experience in AWS, especially in designing solutions, documenting, and presenting.
A broad understanding of other data analytics and AI technologies and platforms such as Databricks, Snowflake, Azure, GCP, etc. (understanding is sufficient; experience is not required).
The ability to not only provide the solution the client wants but also advise on the best solution.
The ability to present and explain technical solutions to business users (e.g., business impact, costs, efficiency).
The ability to present and explain technical solutions in an architects' forum (e.g., deep dives on details, ready to be challenged by other architects).
Experience in preparing Statements of Work and high-level project development plans (not at a project-manager level).
Excellent communication skills.
Relevant certifications such as AWS Certified Solutions Architect – Professional or Associate are preferred.
Adastra Culture Manifesto
Servant Leadership: Managers are servants to employees. Managers are elected to make sure that employees have all the processes, resources, and information they need to provide services to clients in an efficient manner. Any manager, up to the CEO, is visible and reachable for a chat regardless of title. Decisions are taken with consent in an agile manner and executed efficiently with no overdue time. We accept that wrong decisions happen, and we appreciate the learning before we adjust the process for continuous improvement. Employees serve clients. Employees listen attentively to client needs and collaborate internally as a team to cater to them. Managers and employees work together to get things done and are accountable to each other. Corporate KPIs are transparently reviewed at monthly company events with all employees.
Performance-Driven Compensation: We recognize and accept that some of us are more ambitious, more gifted, or more hard-working. We also recognize that some of us look for a stable income and less hassle at a different stage of their careers. There is a place for everyone; we embrace and need this diversity. Grades in our company are not based on the number of years of experience; they are value-driven, based on everyone's ability to deliver their work to clients independently and/or lead others. There is no anniversary/annual bonus; we distribute bonuses on a monthly recurring basis as instant gratification for performance, and this bonus is practically unlimited. There is no annual indexation of salaries; you may be upgraded several times within the year, or not at all, based on your own pace of progress, ambitions, relevant skill set, and recognition by clients.
Work-Life Integration: We challenge the notion of work-life balance; we embrace the notion of work-life integration instead. This philosophy looks at our lives as a single whole in which we serve ourselves, our families, and our clients in an integrated manner. We encourage 100% flexible working hours where you arrange your day. This means you are free when you have little work, but it also means extra effort if you are behind schedule. Working on a Western project also means nobody bothers you during the whole day, but you may have to jump on a scrum call in the evening to talk to your team overseas. We appreciate time and minimize time spent in Adastra meetings. We are also a remote-first company. While we have our collaboration offices and social events, we encourage people to work 100% remotely from home whenever possible. This means saving time and money on commuting, staying home with elderly relatives and little ones, and not missing the special moments in life. It also means you can work from any of our other offices in Europe, North America, or Australia, or move to a place with a lower cost of living without impacting your income. We trust you by default until you fail our trust.
Global Diversity: Adastra Thailand is an international organization. We hire globally, and our biggest partners and clients are in Europe, North America, and Australia. We work in teams with individuals from different cultures, ethnicities, sexual preferences, political views, and religions. We have zero tolerance for anyone who does not pay respect to others or is abusive in any way. We speak different languages to one another, but we speak English when we are together or with clients. Our company is a safe space where communication is encouraged but boundaries regarding sensitive topics are respected. We accept and converge together to serve our teams and clients and ultimately have a good time at work.
Lifelong Learning: On annual average we invest 25% of our working hours in personal development and upskilling outside project work, regardless of seniority or role. We feature more than 400 courses in our Training Repo, and we continue to actively purchase or tailor hands-on content. We certify people at our expense. We like to say we are technology-agnostic; we learn the principles of data management and apply them to different use cases and technology stacks. We believe that the juniors today are the seniors tomorrow; we treat everyone with respect and mentor them into the roles they deserve. We encourage seniors to give back to the IT community through leadership and mentorship. On your last day with us we may give you an open-dated job offer so that you feel welcome to return home as others did before you.
FRAUD ALERT: Be cautious of fake job postings and individuals posing as Adastra employees. HOW TO VERIFY IT'S US: Our employees will only use email addresses ending in @adastragrp.com. Any other domains, even if similar, are not legitimate. We will never request any form of payment, including but not limited to fees, certification costs, or deposits. Please reach out to HRIN@adastragrp.com only if you have any questions.
More About Adastra: Visit Adastra (adastracorp.com) and/or contact us at HRIN@adastragrp.com
Posted 21 hours ago
9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 9+ years of architect experience with Azure or AWS and Databricks
Notice: Immediate preferred
Job Location: Noida, Mumbai, Pune, Bangalore, Gurgaon, Kochi (hybrid work)
Job Description:
• Develop a detailed project plan outlining tasks, timelines, milestones, and dependencies.
• Design and implement solution architectures.
• Understand the source systems and outline the ADF structure; design and schedule packages using ADF.
• Foster collaboration and communication within the team to ensure a smooth workflow.
• Optimize application performance.
• Monitor and manage the allocation of resources to ensure tasks are adequately staffed.
• Create detailed technical specification, business requirement, and unit test report documents.
• Ensure that the project adheres to best practices, coding standards, and technical requirements.
• Collaborate with technical leads to address technical issues and mitigate risks.
Posted 21 hours ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Key Responsibilities:
· Azure Cloud & Databricks:
o Design and build efficient data pipelines using Azure Databricks (PySpark).
o Implement business logic for data transformation and enrichment at scale.
o Manage and optimize Delta Lake storage solutions.
· API Development:
o Develop REST APIs using FastAPI to expose processed data.
o Deploy APIs on Azure Functions for scalable and serverless data access.
· Data Orchestration & ETL:
o Develop and manage Airflow DAGs to orchestrate ETL processes (a minimal sketch follows below).
o Ingest and process data from various internal and external sources on a scheduled basis.
· Database Management:
o Handle data storage and access using PostgreSQL and MongoDB.
o Write optimized SQL queries to support downstream applications and analytics.
· Collaboration:
o Work cross-functionally with teams to deliver reliable, high-performance data solutions.
o Follow best practices in code quality, version control, and documentation.
Required Skills & Experience:
· 5+ years of hands-on experience as a Data Engineer.
· Strong experience with Azure Cloud services.
· Proficient in Azure Databricks, PySpark, and Delta Lake.
· Solid experience with Python and FastAPI for API development.
· Experience with Azure Functions for serverless API deployments.
· Skilled in managing ETL pipelines using Apache Airflow.
· Hands-on experience with PostgreSQL and MongoDB.
· Strong SQL skills and experience handling large datasets.
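A minimal, hedged sketch of an Airflow DAG of the kind this role would own follows. It assumes Airflow 2.4 or newer; the DAG id, schedule, and task callables are illustrative placeholders that simply print what each stage would do.

```python
# Hedged Airflow DAG sketch: extract -> transform -> load, run daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from source APIs / databases")


def transform():
    print("run the PySpark / Databricks transformation job")


def load():
    print("publish curated tables to PostgreSQL / MongoDB")


with DAG(
    dag_id="daily_etl_example",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # requires Airflow 2.4+
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```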
Posted 21 hours ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provide guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizing database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 
Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). Take part in evaluation of new data tools, POCs and provide suggestions. Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus. API: Working knowledge of APIs to consume data from ERP and CRM systems.
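As a purely illustrative sketch of the last skill above (consuming ERP/CRM data over an API), the snippet below pages through a hypothetical REST endpoint with Python's requests library and lands the records as newline-delimited JSON. The URL, auth header, and response shape are assumptions, not a real Cummins interface.

```python
# Illustrative only: page through a hypothetical CRM REST API and write JSONL.
import json
import requests

BASE_URL = "https://crm.example.com/api/v1/accounts"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}           # placeholder credential

def fetch_all(page_size: int = 500):
    """Yield records page by page until the API returns an empty page."""
    page = 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=HEADERS,
            params={"page": page, "page_size": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        records = resp.json().get("results", [])  # assumed response shape
        if not records:
            break
        yield from records
        page += 1

with open("accounts_extract.jsonl", "w", encoding="utf-8") as fh:
    for record in fetch_all():
        fh.write(json.dumps(record) + "\n")
```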
Posted 22 hours ago
4.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Supports, develops and maintains a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale. Key Responsibilities Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Implements methods to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users. Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Develops physical data models and implements data storage architectures as per design guidelines. Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual, physical and logical data models. Participates in testing and troubleshooting of data pipelines. Develops and operates large scale data storage and processing solutions using different distributed and cloud-based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses agile development technologies, such as DevOps, Scrum, Kanban and continuous improvement cycles, for data-driven applications. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience 4-5 Years of experience. Relevant experience preferred such as working in a temporary student employment, intern, co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes: Exposure to Big Data open source SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Familiarity developing applications requiring large file movement for a Cloud-based environment Exposure to Agile software development Exposure to building analytical solutions Exposure to IoT technology Qualifications Work closely with business Product Owner to understand product vision. Participate in DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. Work under limited supervision to design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers. Take part in evaluation of new data tools, POCs with guidance and help from senior data engineers. Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision. Assist to resolve issues that compromise data accuracy and usability. Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Intermediate level expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. 
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. API: Working knowledge of APIs to consume data from ERP and CRM systems.
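The Kafka and Spark exposure asked for in this role can be pictured with a small Structured Streaming sketch: it reads a topic and lands the raw payloads as Parquet. The broker address, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the cluster's classpath.

```python
# Illustrative sketch only: consume a Kafka topic with Spark Structured Streaming
# and persist the raw payloads to Parquet. Names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "plant-telemetry")              # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream parsing.
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/landing/telemetry/")             # hypothetical landing path
    .option("checkpointLocation", "/data/checkpoints/telemetry/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```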
Posted 22 hours ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users. Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizes database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud-based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban. Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake.
Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). Take part in evaluation of new data tools, POCs and provide suggestions. Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus. API: Working knowledge of APIs to consume data from ERP and CRM systems.
Posted 22 hours ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
A Career at HARMAN - Harman Tech Solutions (HTS) You will be part of a global, multi-disciplinary team dedicated to harnessing the power of technology and shaping the future. At HARMAN HTS, your role involves solving challenges through creative and innovative solutions that combine physical and digital elements to address various needs. Your responsibilities will include: - Developing and executing test scripts to validate data pipelines, transformations, and integrations. - Formulating and maintaining test strategies, including smoke, performance, functional, and regression testing, to ensure data processing and ETL jobs align with requirements. - Collaborating with development teams to assess changes in data workflows, updating test cases as needed to maintain data integrity. - Designing and implementing tests for data validation, storage, and retrieval using Azure services like Data Lake, Synapse, and Data Factory. - Continuously enhancing automated tests to ensure timely delivery of new features per defined quality standards. - Participating in data reconciliation and verifying Data Quality frameworks to uphold data accuracy, completeness, and consistency. - Sharing knowledge and best practices by documenting testing processes and findings in collaboration with business analysts and technology teams. - Communicating testing progress effectively with stakeholders, addressing issues or blockers, and ensuring alignment with business objectives. - Maintaining a comprehensive understanding of the Azure Data Lake platform's data landscape to ensure thorough testing coverage. Requirements: - 3-6 years of QA experience with a strong focus on Big Data testing, particularly in Data Lake environments on Azure's cloud platform. - Proficiency in Azure Data Factory, Azure Synapse Analytics, and Databricks for big data processing and scaled data quality checks. - Strong SQL skills for writing and optimizing simple and complex queries for data validation and testing. - Proficient in PySpark, with experience in data manipulation, transformation, and executing test scripts for data processing and validation. - Hands-on experience with Functional & system integration testing in big data environments, ensuring seamless data flow and accuracy across systems. - Knowledge and ability to design and execute test cases in a behavior-driven development environment. - Fluency in Agile methodologies, active participation in Scrum ceremonies, and a strong understanding of Agile principles. - Familiarity with tools like Jira, including experience with X-Ray or Jira Zephyr for defect management and test case management. - Proven experience working on high-traffic and large-scale software products, ensuring data quality, reliability, and performance under demanding conditions. What We Offer: - Access to employee discounts on HARMAN/Samsung products. - Professional development opportunities through HARMAN University. - Flexible work schedule promoting work-life integration and collaboration in a global environment. - Inclusive and diverse work environment fostering professional and personal development. - Tuition reimbursement. - Be Brilliant employee recognition and rewards program. You Belong Here: HARMAN values every employee and encourages sharing ideas and perspectives within a supportive culture that celebrates uniqueness. Continuous learning and development opportunities are offered to empower you in shaping your career. 
About HARMAN: HARMAN has a rich legacy of innovation, amplifying the sense of sound since the 1920s. We create integrated technology platforms that enhance safety, connectivity, and intelligence across automotive, lifestyle, and digital transformation solutions. By exceeding engineering and design standards, we offer extraordinary experiences under iconic brands like JBL, Mark Levinson, and Revel. Join us in innovating and making a lasting impact. Important Notice: Beware of recruitment scams.
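To make the testing focus of this role concrete, here is an illustrative example of the kind of PySpark data-validation checks such a QA role writes: row-count reconciliation between source and curated tables, null checks on keys, and a simple business rule. Table paths, column names, and rules are invented for the sketch, not HARMAN's actual suite.

```python
# Illustrative data-quality checks in PySpark; could sit inside a pytest suite.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

source = spark.read.parquet("/mnt/raw/sales/")        # hypothetical source extract
target = spark.read.parquet("/mnt/curated/sales/")    # hypothetical curated table

# 1. Row-count reconciliation between source and target.
assert source.count() == target.count(), "Row counts do not match"

# 2. Key columns must not contain nulls after transformation.
null_keys = target.filter(F.col("order_id").isNull()).count()
assert null_keys == 0, f"{null_keys} rows have a null order_id"

# 3. Business rule: net_amount should never be negative.
negative = target.filter(F.col("net_amount") < 0).count()
assert negative == 0, f"{negative} rows violate the non-negative amount rule"

print("All data-quality checks passed")
```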
Posted 23 hours ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Solution Designer (Cloud Data Integration) at Barclays within the Customer Digital and Data Business Area, you will play a vital role in supporting the successful delivery of location strategy projects. Your responsibilities will include ensuring projects are delivered according to plan, budget, quality standards, and governance protocols. By spearheading the evolution of the digital landscape, you will drive innovation and excellence, utilizing cutting-edge technology to enhance our digital offerings and deliver unparalleled customer experiences. To excel in this role, you should possess hands-on experience working with large-scale data platforms and developing cloud solutions within the AWS data platform. Your track record should demonstrate a history of driving business success through your expertise in AWS, distributed computing paradigms, and designing data ingestion programs using technologies like Glue, Lambda, S3, Redshift, Snowflake, Apache Kafka, and Spark Streaming. Proficiency in Python, PySpark, SQL, and database management systems is essential, along with a strong understanding of data governance principles and tools. Additionally, valued skills for this role may include experience in multi-cloud solution design, data modeling, data governance frameworks, agile methodologies, project management tools, business analysis, and product ownership within a data analytics context. A basic understanding of the banking domain, along with excellent analytical, communication, and interpersonal skills, will be crucial for success in this position. Your main purpose as a Solution Designer will involve designing, developing, and implementing solutions to complex business problems by collaborating with stakeholders to understand their needs and requirements. You will be accountable for designing solutions that balance technology risks against business delivery, driving consistency and aligning with modern software engineering practices and automated delivery tooling. Furthermore, you will be expected to provide impact assessments, fault finding support, and architecture inputs required to comply with the bank's governance processes. As an Assistant Vice President in this role, you will be responsible for advising on decision-making processes, contributing to policy development, and ensuring operational effectiveness. If the position involves leadership responsibilities, you will lead a team to deliver impactful work and set objectives for employees while demonstrating leadership behaviours focused on listening, inspiring, aligning, and developing others. Alternatively, as an individual contributor, you will lead collaborative assignments, guide team members, identify new directions for projects, consult on complex issues, and collaborate with other areas to support business activities. All colleagues at Barclays are expected to embody the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as the Barclays Mindset to Empower, Challenge, and Drive. By demonstrating these values and mindset, you will contribute to creating an environment where colleagues can thrive and deliver consistently excellent results.
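A minimal sketch, assuming the standard boto3 SDK, of one common AWS ingestion pattern this role references: an S3-triggered Lambda that kicks off a Glue job for each new landing-zone object. The job name and argument key are hypothetical, not Barclays' actual design.

```python
# Illustrative sketch only: S3 put-event -> Lambda -> Glue job run.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    runs = []
    for record in event.get("Records", []):          # S3 put-event notifications
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        response = glue.start_job_run(
            JobName="ingest-landing-to-warehouse",    # hypothetical Glue job name
            Arguments={
                "--source_path": f"s3://{bucket}/{key}",
            },
        )
        runs.append(response["JobRunId"])
    return {"started_job_runs": runs}
```

The same pattern generalizes to Kafka/Spark Streaming ingestion, with Lambda replaced by a long-running consumer and Glue by a streaming job, which is closer to the near-real-time technologies listed in the posting.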
Posted 23 hours ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
Are you passionate about the intersection of data, technology, and science, and excited by the potential of Real-World Data (RWD) and AI? Do you thrive in collaborative environments and aspire to contribute to the discovery of groundbreaking medical insights? If so, join the data42 team at Novartis! At Novartis, we reimagine medicine by leveraging state-of-the-art analytics and our extensive internal and external data resources. Our data42 platform grants access to high-quality, multi-modal preclinical and clinical data, along with RWD, creating the optimal environment for developing advanced AI/ML models and generating health insights. Our global team of data scientists and engineers utilizes this platform to uncover novel insights and guide drug development decisions. As an RWD SME / RWE Execution Data Scientist, you will focus on executing innovative methodologies and AI models to mine RWD on the data42 platform. You will be the go-to authority for leveraging diverse RWD modalities and patterns crucial to understanding patient populations, biomarkers, and drug targets, accelerating the development of life-changing medicines. Duties and Responsibilities: - Collaborate with R&D stakeholders to co-create and implement innovative, repeatable, scalable, and automatable data and technology solutions in line with the data42 strategy. - Be a data subject matter expert (SME): understand RWD of different modalities, vocabularies (LOINC, ICD, HCPCS, etc.), non-traditional RWD (patient-reported outcomes, wearables and mobile health data) and where and how they can be used. - Contribute to data strategy implementation such as Federated Learning, tokenization, data quality frameworks, regulatory requirements, conversion to common data models and standards, FAIR principles, and integration with the enterprise catalog. - Define and execute advanced, integrated, and scalable analytical approaches and research methodologies using AI models for RWD analysis across the Research, Development and Commercial continuum. - Stay current with emerging applications and trends, driving the development of advanced analytic capabilities for data42 across the real-world evidence generation lifecycle. - Demonstrate high agility working across various cross-located and cross-functional associates across business domains for priority disease areas to execute complex and critical business problems with quantified business impact. Ideal Candidate Profile: - PhD or MSc. in a quantitative discipline with proven expertise in AI/ML. - 8+ years of relevant experience in Data Science. - Extensive experience in Statistical and Machine Learning techniques. - Proficiency in tools and packages like Python, SQL, and exposure to dashboard or web-app building. - Knowledge of data standards, e.g. OHDSI OMOP, FHIR HL7, and others. - Strong collaboration skills with stakeholders in an individual contributor capacity. - High learning agility and adherence to updates in the industry and area of work. - Optional experience in Biomedical Research and development in pharma is a bonus. Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up. Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If you need accommodation, please send an e-mail to [email protected] and let us know your request and contact information.
Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.
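As a hedged illustration of the OMOP-style RWD work this role describes (not Novartis code), the snippet below counts patients carrying a given condition concept in a condition_occurrence table. It uses an in-memory SQLite database with toy rows so it runs standalone; the concept ID and values are examples only.

```python
# Illustrative OMOP-CDM-style cohort count over toy data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE condition_occurrence (
        person_id INTEGER,
        condition_concept_id INTEGER,
        condition_start_date TEXT
    );
    INSERT INTO condition_occurrence VALUES
        (1, 201826, '2021-03-01'),   -- hypothetical rows; 201826 used as an example concept id
        (2, 201826, '2020-11-15'),
        (3, 4329847, '2022-01-09');
""")

cohort_sql = """
    SELECT COUNT(DISTINCT person_id) AS n_patients
    FROM condition_occurrence
    WHERE condition_concept_id = :concept_id
"""
n = conn.execute(cohort_sql, {"concept_id": 201826}).fetchone()[0]
print(f"Patients with the target condition concept: {n}")
```

On a platform like data42 the same query would run against the governed CDM store rather than SQLite, typically via Python and SQL as listed in the candidate profile.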
Posted 23 hours ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
You will collaborate with business, platform, and technology stakeholders to understand the scope of projects. Your role will involve performing comprehensive exploratory data analysis at various levels of granularity to derive inferences for further solutioning, experimentation, and evaluation. You will design, develop, and deploy robust enterprise AI solutions using Generative AI, NLP, machine learning, and other relevant technologies. It will be essential to continuously focus on providing business value while ensuring technical sustainability. Additionally, you will promote and drive adoption of cutting-edge data science and AI practices within the team while staying up to date on relevant technologies to propel the team forward. We are seeking a team player with 4-7 years of experience in the field of data science and AI. The ideal candidate will have proficiency in programming/querying languages like Python, SQL, PySpark, and familiarity with Azure cloud platform tools such as Databricks, ADF, Synapse, Web App, among others. You should possess strong work experience in text analytics, NLP, and Generative AI, showcasing a scientific and analytical thinking mindset comfortable with brainstorming and ideation. A deep interest in driving business outcomes through AI/ML is crucial, alongside a bachelor's or master's degree in engineering or computer science with or without a specialization in the field of AI/ML. Strong business acumen and the desire to collaborate with business teams to solve problems are highly valued. Prior understanding of the business domain of shipping and logistics is considered advantageous. Should you require any adjustments during the application and hiring process, we are happy to support you. For special assistance or accommodations to use our website, apply for a position, or perform a job, please contact us at accommodationrequests@maersk.com.
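Purely as an illustration of the text-analytics skill set mentioned above (not Maersk's pipeline), a tiny TF-IDF plus logistic-regression classifier over made-up shipping snippets shows the basic supervised pattern; real solutions would involve far richer data and, per the posting, NLP/Generative AI components.

```python
# Illustrative text-classification sketch with scikit-learn; texts and labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "container delayed at port due to congestion",
    "shipment delivered on time to consignee",
    "vessel departure postponed because of weather",
    "cargo arrived ahead of schedule",
]
labels = ["delay", "on_time", "delay", "on_time"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["customs hold causing port delay"]))  # expected: ['delay']
```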
Posted 23 hours ago
5.0 - 12.0 years
0 Lacs
noida, uttar pradesh
On-site
NTT DATA is looking for an AI Presales Solution Architect to join the team in Noida, Uttar Pradesh, India. As a seasoned professional with over 12 years of experience in Data and Analytics, including more than 5 years as a presales solutions architect, you will play a crucial role in driving the success of data projects. Your responsibilities will include end-to-end presales solutioning for data projects with a deal value exceeding $2 million. You should be well-versed in data and analytics technologies such as Data migration, ETL Migration, Data strategy, Cloud Data transformation, Estimation, and Pricing Strategy. Your ability to create value propositions, resource loading, estimations, and project delivery timelines will be essential in transforming client problems/needs into actionable deliverables that yield quick and long-term wins. In addition, you should have experience in coordinating with multiple teams to drive solutions and delivering compelling client presentations. Your expertise in working independently on Presales RFP/RFI in Data, analytics, and AI/GenAI areas, as well as your proficiency in hyperscale cloud platforms like Azure, AWS, or GCP, Databricks, Snowflake, and ETL tools, will be highly valued. NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, we are dedicated to helping clients innovate, optimize, and transform for long-term success. With a diverse team of experts in over 50 countries and a robust partner ecosystem, we offer services in business and technology consulting, data and artificial intelligence, industry solutions, application development, infrastructure management, and connectivity. If you are a passionate and innovative individual seeking to be part of an inclusive and forward-thinking organization, we encourage you to apply now and be a part of our journey towards driving digital and AI transformation on a global scale. Visit us at us.nttdata.com to learn more about our commitment to innovation and sustainable growth.
Posted 23 hours ago
12.0 - 16.0 years
0 Lacs
noida, uttar pradesh
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, more openness in a cloud-enabled world. The Business & Industry Copilots group is a rapidly growing organization that is responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solution and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications and this organization is at the heart of how business applications are designed and delivered. This is an exciting time to join our Customer Experience (CXP) group and work on something highly strategic to Microsoft. The goal of CXP Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across Marketing, Sales, Services and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value from Azure & Power Platform, we ensure our solutions are robust and efficient. Our organization's implementation acts as a reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us! We are hiring a passionate Principal SW Engineering Manager to lead a team of highly motivated and talented software developers building highly scalable data platforms and delivering services and experiences that empower Microsoft's customer, seller and partner ecosystem to be successful. This is a unique opportunity to use your leadership skills and experience in building core technologies that will directly affect the future of Microsoft on the cloud. In this position, you will be part of a fun-loving, diverse team that seeks challenges, loves learning and values teamwork. You will collaborate with team members and partners to build high-quality and innovative data platforms with full-stack data solutions using the latest technologies in a dynamic and agile environment, and have opportunities to anticipate future technical needs of the team and provide technical leadership to keep raising the bar for our competition. We use industry-standard technology: C#, JavaScript/TypeScript, HTML5, ETL/ELT, Data warehousing, and/or Business Intelligence Development. Responsibilities As a leader of the engineering team, you will be responsible for the following: - Build and lead a world-class data engineering team. - Be passionate about technology and obsessed with customer needs. - Champion data-driven decisions for features identification, prioritization and delivery. - Managing multiple projects, including timelines, customer interaction, feature tradeoffs, etc. - Delivering on an ambitious product and services roadmap, including building new services on top of the vast amount of data collected by our batch and near-real-time data engines. - Design and architect internet-scale and reliable services. - Leveraging knowledge of machine learning (ML) models to select appropriate solutions for business objectives.
- Communicate effectively and build relationships with our partner teams and stakeholders. - Help shape our long-term architecture and technology choices across the full client and services stack. - Understand the talent needs of the team and help recruit new talent. - Mentoring and growing other engineers to bring in efficiency and better productivity. - Experiment with and recommend new technologies that simplify or improve the tech stack. - Work to help build an inclusive working environment. Qualifications Basic Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. - 12+ years of experience building high-scale enterprise Business Intelligence and data engineering solutions. - 3+ years of management experience leading a high-performance engineering team. - Proficient in designing and developing distributed systems on cloud platforms. - Must be able to plan work and work to a plan, adapting as necessary in a rapidly evolving environment. - Experience using a variety of data stores, including data ETL/ELT, warehouses, RDBMS, in-memory caches, and document databases. - Experience using ML, anomaly detection, predictive analysis, exploratory data analysis. - A strong understanding of the value of data, data exploration and the benefits of a data-driven organizational culture. - Strong communication skills and proficiency with executive communications. - Demonstrated ability to effectively lead and operate in a cross-functional, global organization. Preferred Qualifications - Prior experience as an engineering site leader is a strong plus. - Proven success in recruiting and scaling engineering organizations effectively. - Demonstrated ability to provide technical leadership to teams, with experience managing large-scale data engineering projects. - Hands-on experience working with large data sets using tools such as SQL, Databricks, PySparkSQL, Synapse, Azure Data Factory, or similar technologies. - Expertise in one or more of the following areas: AI and Machine Learning. - Experience with Business Intelligence or data visualization tools, particularly Power BI, is highly beneficial.
Posted 1 day ago