2.0 - 6.0 years
1 - 5 Lacs
Noida
Work from Office
Req ID: 324014. We are currently seeking a Tableau Admin with AWS Experience to join our team in NOIDA, Uttar Pradesh (IN-UP), India (IN).

Tableau Admin with AWS Experience
We are seeking a skilled Tableau Administrator with experience in AWS to join our team. The ideal candidate will be responsible for managing and optimizing our Tableau Server environment hosted on AWS, ensuring efficient operation, data security, and seamless integration with other data sources and analytics tools.

Key Responsibilities
- Manage, configure, and administer Tableau Server on AWS, including setting up sites and managing user access and permissions.
- Monitor server activity/performance, conduct regular system maintenance, and troubleshoot issues to ensure optimal performance and minimal downtime.
- Collaborate with data engineers and analysts to optimize data sources and dashboard performance.
- Implement and manage security protocols, ensuring compliance with data governance and privacy policies.
- Automate monitoring and server management tasks using AWS and Tableau APIs (a scripted example follows below).
- Assist in the design and development of complex Tableau dashboards.
- Provide technical support and training to Tableau users.
- Stay updated on the latest Tableau and AWS features and best practices, recommending and implementing improvements.

Qualifications
- Proven experience as a Tableau Administrator, with strong skills in Tableau Server and Tableau Desktop.
- Experience with AWS, particularly with services relevant to hosting and managing Tableau Server (e.g., EC2, S3, RDS).
- Familiarity with SQL and experience working with various databases.
- Knowledge of data integration, ETL processes, and data warehousing principles.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication and collaboration skills.
- Relevant certifications in Tableau and AWS are a plus.

A Tableau Administrator, also known as a Tableau Server Administrator, is responsible for managing and maintaining Tableau Server, a platform that enables organizations to create, share, and collaborate on data visualizations and dashboards. Here's a typical job description for a Tableau Admin:
1. Server Administration: Install, configure, and maintain Tableau Server to ensure its reliability, performance, and security.
2. User Management: Manage user accounts, roles, and permissions on Tableau Server, ensuring appropriate access control.
3. Security: Implement security measures, including authentication, encryption, and access controls, to protect sensitive data and dashboards.
4. Data Source Connections: Set up and manage connections to various data sources, databases, and data warehouses for data extraction.
5. License Management: Monitor Tableau licensing, allocate licenses as needed, and ensure compliance with licensing agreements.
6. Backup and Recovery: Establish backup and disaster recovery plans to safeguard Tableau Server data and configurations.
7. Performance Optimization: Monitor server performance, identify bottlenecks, and optimize configurations to ensure smooth dashboard loading and efficient data processing.
8. Scaling: Scale Tableau Server resources to accommodate increasing user demand and data volume.
9. Troubleshooting: Diagnose and resolve issues related to Tableau Server, data sources, and dashboards.
10. Version Upgrades: Plan and execute server upgrades, apply patches, and stay current with Tableau releases.
11. Monitoring and Logging: Set up monitoring tools and logs to track server health, user activity, and performance metrics.
12. Training and Support: Provide training and support to Tableau users, helping them with dashboard development and troubleshooting.
13. Collaboration: Collaborate with data analysts, data scientists, and business users to understand their requirements and assist with dashboard development.
14. Documentation: Maintain documentation for server configurations, procedures, and best practices.
15. Governance: Implement data governance policies and practices to maintain data quality and consistency across Tableau dashboards.
16. Integration: Collaborate with IT teams to integrate Tableau with other data management systems and tools.
17. Usage Analytics: Generate reports and insights on Tableau usage and adoption to inform decision-making.
18. Stay Current: Keep up-to-date with Tableau updates, new features, and best practices in server administration.

A Tableau Administrator plays a vital role in ensuring that Tableau is effectively utilized within an organization, allowing users to harness the power of data visualization and analytics for informed decision-making.
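The automation duties above (user management, monitoring via the Tableau APIs) are the kind of task typically scripted against Tableau's REST API. Below is a minimal sketch using the tableauserverclient library; the server URL, token name, and site are placeholder assumptions, not values from this posting.

```python
# Minimal sketch: a routine Tableau Server health/user audit with the
# tableauserverclient library. The server URL, token name/value, and
# site are placeholders -- substitute your own environment's values.
import tableauserverclient as TSC

auth = TSC.PersonalAccessTokenAuth(
    token_name="admin-automation",        # hypothetical PAT name
    personal_access_token="<token>",      # never hard-code in production
    site_id="",                           # "" = the Default site
)
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    info = server.server_info.get()
    print(f"Tableau Server version: {info.product_version}")

    # Page through users and flag unlicensed accounts for cleanup.
    for user in TSC.Pager(server.users):
        if user.site_role == "Unlicensed":
            print(f"Unlicensed user to review: {user.name}")
```

The same pattern extends to other administrative objects (workbooks, schedules, subscriptions) exposed by the library.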
Posted 1 month ago
4.0 - 7.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Must-Have Qualifications:
- AWS Expertise: Strong hands-on experience with AWS data services including Glue, Redshift, Athena, S3, Lake Formation, Kinesis, Lambda, Step Functions, EMR, and CloudWatch.
- ETL/ELT Engineering: Deep proficiency in designing robust ETL/ELT pipelines with AWS Glue (PySpark/Scala), Python, dbt, or other automation frameworks.
- Data Modeling: Advanced knowledge of dimensional (Star/Snowflake) and normalised data modeling, optimised for Redshift and S3-based lakehouses.
- Programming Skills: Proficient in Python, SQL, and PySpark, with automation and scripting skills for data workflows.
- Architecture Leadership: Demonstrated experience leading large-scale AWS data engineering projects across teams and domains.
- Pre-sales & Consulting: Proven experience working with clients, responding to technical RFPs, and designing cloud-native data solutions.
- Advanced PySpark Expertise: Deep hands-on experience in writing optimized PySpark code for distributed data processing, including transformation pipelines using DataFrames, RDDs, and Spark SQL, with a strong grasp of lazy evaluation, the Catalyst optimizer, and the Tungsten execution engine.
- Performance Tuning & Partitioning: Proven ability to debug and optimize Spark jobs through custom partitioning strategies, broadcast joins, caching, and checkpointing, with proficiency in tuning executor memory and shuffle configurations, and leveraging the Spark UI for performance diagnostics in large-scale (>TB) data workloads, as sketched below.
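For illustration, here is a small PySpark sketch of the tuning techniques the last two bullets name: a broadcast join, explicit repartitioning, and caching. The S3 paths, column names, and shuffle-partition count are hypothetical.

```python
# Illustrative PySpark tuning sketch: broadcast join to avoid shuffling a
# small dimension table, repartitioning on a key, and caching a reused
# DataFrame. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("tuning-sketch")
    .config("spark.sql.shuffle.partitions", "400")  # sized for a large job
    .getOrCreate()
)

facts = spark.read.parquet("s3://bucket/facts/")   # large fact table
dims = spark.read.parquet("s3://bucket/dims/")     # small dimension table

# Broadcast the small side so the join happens map-side, with no shuffle.
enriched = facts.join(F.broadcast(dims), on="customer_id", how="left")

# Repartition on the aggregation key before a wide transformation, then
# cache because the result feeds several downstream outputs.
daily = enriched.repartition("event_date").cache()
daily.groupBy("event_date").agg(F.sum("amount").alias("revenue")).show()
```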
Posted 1 month ago
4.0 - 6.0 years
20 - 25 Lacs
Noida
Work from Office
Technical Requirements
- SQL (Advanced level): Strong command of complex SQL logic, including window functions, CTEs, and pivot/unpivot, and proficiency in stored procedure/SQL script development. Experience writing maintainable SQL for transformations.
- Python for ETL: Ability to write modular and reusable ETL logic using Python (see the sketch below). Familiarity with JSON manipulation and API consumption.
- ETL Pipeline Development: Experienced in developing ETL/ELT pipelines, including data profiling, validation, quality/health checks, error handling, logging, and notifications.

Nice-to-Have Skills
- Experience with AWS Redshift, Databricks, and Yellowbrick.
- Knowledge of CI/CD practices for data workflows.

Roles and Responsibilities
- Leverage expertise in AWS Redshift, PostgreSQL, Databricks, and Yellowbrick to design and implement scalable data solutions.
- Partner with data analysts and architects to build and test robust ETL pipelines using SQL and Python in Databricks and Yellowbrick.
- Develop and maintain data validation frameworks to ensure high data quality and reliability.
- Optimize database queries to enhance performance and ensure cost-effective data processing.
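As a concrete illustration of "modular and reusable ETL logic using Python" with JSON manipulation and API consumption, here is a minimal extract-validate-stage sketch. The endpoint URL, response envelope, and required fields are assumptions for the example.

```python
# Minimal modular ETL sketch: pull JSON from an API, validate it, and
# stage it for loading. URL, field names, and envelope are illustrative.
import json
import logging
import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(url: str) -> list[dict]:
    """Fetch records from a JSON API, raising on HTTP errors."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()["records"]          # assumed response envelope

def validate(records: list[dict]) -> list[dict]:
    """Basic data-quality gate: drop rows missing required keys."""
    required = {"id", "amount", "created_at"}
    good = [r for r in records if required <= r.keys()]
    log.info("validation: kept %d of %d rows", len(good), len(records))
    return good

def stage(records: list[dict], path: str) -> None:
    """Write newline-delimited JSON, a common warehouse load format."""
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")

if __name__ == "__main__":
    stage(validate(extract("https://api.example.com/v1/orders")), "orders.ndjson")
```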
Posted 1 month ago
6.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
About The Role
Role Purpose: Data Analyst, Data Modeling, Data Pipeline, ETL Process, Tableau, SQL, Snowflake.

Do
- Strong expertise in data modeling, data warehousing, and ETL processes.
- Proficient in SQL and experience with data warehousing tools (e.g., Snowflake, Redshift, BigQuery) and ETL tools (e.g., Talend, Informatica, SSIS).
- Demonstrated ability to lead and manage complex projects involving cross-functional teams.
- Excellent analytical, problem-solving, and organizational skills.
- Strong communication and leadership abilities, with a track record of mentoring and developing team members.
- Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
- Preference for candidates with experience in ETL using Python, Airflow, or dbt.
- Build capability to ensure operational excellence and maintain superior customer service levels for the existing account/client.
- Undertake product trainings to stay current with product features, changes, and updates.
- Enroll in product-specific and any other trainings per client requirements/recommendations.
- Partner with team leaders to brainstorm and identify training themes and learning issues to better serve the client.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver
Performance Parameter | Measure
1. Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback
2. Self-Management | Productivity, efficiency, absenteeism, training hours, no. of technical trainings completed
Posted 1 month ago
3.0 - 6.0 years
2 - 6 Lacs
Chennai
Work from Office
Skills: AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), Gateway, CloudFormation/Terraform, Step Functions, CloudWatch, Python, PySpark.

Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge in building data processing systems with Python, PySpark and cloud technologies (AWS). Experience in development in AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR); a Glue job skeleton follows below.

Required Skills: Amazon Kinesis, Amazon Aurora, Data Warehouse, SQL, AWS Lambda, Spark, AWS QuickSight, advanced Python skills, data engineering ETL and ELT skills, experience with cloud platforms (AWS, GCP, or Azure).

Mandatory skills: Data warehouse, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
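A skeleton of the kind of AWS Glue PySpark job this role describes might look like the following; the catalog database, table, and bucket names are placeholders.

```python
# Sketch of an AWS Glue PySpark job: read from the Glue Data Catalog,
# transform with Spark, write Parquet to S3. Names are placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (e.g., by a crawler).
src = glue.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop obviously bad rows, then switch to a Spark DataFrame for SQL-style work.
df = src.toDF().filter("order_id IS NOT NULL")

df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
job.commit()
```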
Posted 1 month ago
3.0 - 5.0 years
4 - 8 Lacs
Pune
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
- Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud databases, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3.
- Has good knowledge of cloud compute services and load balancing.
- Has good knowledge of cloud identity management, authentication and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions (a minimal Lambda handler sketch follows below).
- Experience in using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.

Your Profile
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles and best practices in cloud.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
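As a minimal illustration of the "cloud utility functions" bullet, here is a hedged AWS Lambda handler in Python that reacts to an S3 event; the event shape follows the standard S3 notification format, and the downstream action is left as a stub.

```python
# Minimal AWS Lambda handler sketch: triggered by an S3 event, it logs
# each new object. Bucket names and the downstream action are illustrative.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        logger.info("new object: s3://%s/%s", bucket, key)
        # ...kick off downstream processing here (Step Functions, Glue, etc.)
    return {"statusCode": 200, "body": json.dumps("ok")}
```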
Posted 1 month ago
5.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Roles and Responsibilities:
- Experience in AWS Glue
- Experience with one or more of the following: Spark, Scala, Python, and/or R
- Experience in API development with NodeJS
- Experience with AWS (S3, EC2) or another cloud provider
- Experience in data virtualization tools like Dremio and Athena is a plus
- Should be technically proficient in Big Data concepts
- Should be technically proficient in Hadoop and NoSQL (MongoDB)
- Good communication and documentation skills
Posted 1 month ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Roles & Responsibilities:
- Total 8-10 years of working experience, including 8-10 years of experience with big data tools like Spark, Kafka, Hadoop, etc.
- Design and deliver consumer-centric, high-performance systems. You would be dealing with huge volumes of data sets arriving through batch and streaming platforms.
- Build and deliver data pipelines that process, transform, integrate and enrich data to meet various demands from business.
- Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects.
- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc.
- Design, build, test and deploy streaming pipelines for data processing in real time and at scale (a Structured Streaming sketch follows this posting).
- Experience with stream-processing systems like Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object-functional scripting languages: Scala, Java, Python, etc.
- Develop software systems using test-driven development, employing CI/CD practices.
- Partner with other engineers and team members to develop software that meets business needs.
- Follow Agile methodology for software development and technical documentation.
- Good to have banking/finance domain knowledge.
- Strong written and oral communication, presentation and interpersonal skills.
- Exceptional analytical, conceptual, and problem-solving abilities.
- Able to prioritize and execute tasks in a high-pressure environment.
- Experience working in a team-oriented, collaborative environment.
- 8-10 years of hands-on coding experience; proficient in Java, with good knowledge of its ecosystem.
- Experience writing Spark code using Scala.
- Experience with big data tools like Hadoop, Spark, Kafka, Flink, Hive, Pig, Hue, Sqoop, etc.
- Solid understanding of object-oriented programming and HDFS concepts; familiar with various design and architectural patterns.
- Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and Cassandra.
- Experience with data pipeline tools like Airflow, etc.
- Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery.
- Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks.
Location: Pune/Mumbai/Bangalore/Chennai
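The streaming responsibility above could be sketched with Spark Structured Streaming reading from Kafka, as below. The topic, schema, and S3 paths are assumptions, and the job needs the spark-sql-kafka connector package on the classpath.

```python
# Sketch of a real-time pipeline: consume JSON events from Kafka with
# Spark Structured Streaming, parse them, and write to a sink with
# checkpointing. Topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

schema = StructType([
    StructField("account_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
)

# Kafka delivers bytes; cast the value to string and parse the JSON payload.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
       .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://bucket/stream-out/")
    .option("checkpointLocation", "s3://bucket/checkpoints/stream-out/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```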
Posted 1 month ago
12.0 - 15.0 years
13 - 17 Lacs
Mumbai
Work from Office
12+ years of experience in the Big Data space across architecture, design, development, testing & deployment, with a full understanding of the SDLC.
1. Experience with Hadoop and its related technology stack.
2. Experience with the Hadoop ecosystem (HDP+CDP) / Big Data (especially Hive); hands-on experience with programming languages such as Java/Scala/Python; hands-on experience/knowledge of Spark.
3. Being responsible for, and focusing on, the uptime and reliable running of all ingestion/ETL jobs.
4. Good SQL and experience working in a Unix/Linux environment is a must.
5. Create and maintain optimal data pipeline architecture.
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
7. Good to have cloud experience.
8. Good to have experience with Hadoop integration with data visualization tools like Power BI.
Location: Mumbai, Pune, Chennai, Hyderabad, Coimbatore, Kolkata
Posted 1 month ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS (a batch-ingestion sketch follows below).
- Experienced in developing efficient software code for multiple use cases, leveraging the Spark Framework with Python or Scala and Big Data technologies, for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on AWS; experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Certified Spark developers.
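A compact sketch of the batch-ingestion responsibility above: land raw CSV files from S3, apply light cleansing with Spark, and publish a partitioned Hive table. The paths, columns, and table name are illustrative only.

```python
# Illustrative Spark batch-ingestion step: read raw CSVs from S3, cleanse,
# and write a partitioned Hive-managed table. All names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("ingest-sketch")
    .enableHiveSupport()        # lets us write to a Hive-managed table
    .getOrCreate()
)

raw = spark.read.option("header", True).csv("s3://bucket/landing/claims/")

clean = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("claim_date", F.to_date("claim_date", "yyyy-MM-dd"))
       .filter(F.col("claim_id").isNotNull())
)

(clean.write.mode("append")
      .partitionBy("claim_date")
      .saveAsTable("curated.claims"))
```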
Posted 1 month ago
4.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
- Comprehensive Feature Development and Issue Resolution: Working on end-to-end feature development and solving challenges faced in the implementation.
- Stakeholder Collaboration and Issue Resolution: Collaborate with key stakeholders, internal and external, to understand the problems and issues with the product and features, and solve the issues as per defined SLAs.
- Continuous Learning and Technology Integration: Being eager to learn new technologies and implementing the same in feature development.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Creative problem-solving skills and superb communication skills.
- Container-based solutions.
- Strong experience with Node.js and the AWS stack: AWS Lambda, AWS API Gateway, AWS CDK, AWS DynamoDB, AWS SQS.
- Experience with infrastructure as code using AWS CDK.
- Expertise in encryption and decryption techniques for securing APIs, and API authentication and authorization. Primarily, more experience is required on Lambda and API Gateway.
- Candidates having the AWS Certified Cloud Practitioner / AWS Certified Developer Associate certifications will be preferred.

Preferred technical and professional experience:
- Experience in distributed/scalable systems.
- Knowledge of standard tools for optimizing and testing code.
- Knowledge/experience of the Development/Build/Deploy/Test life cycle.
Posted 1 month ago
2.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Strong experience in SQL.
- Strong experience in dbt.
- Strong experience in data warehousing concepts.
- Strong experience in AWS or any other cloud; Redshift knowledge is good to have.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
Posted 1 month ago
5.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases, leveraging the Spark Framework with Python or Scala and Big Data technologies, for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills.
- Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on AWS.
- Exposure to streaming solutions and message brokers like Kafka.
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
- Good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Certified Spark developers.
- AWS S3, Redshift, and EMR for data storage and distributed processing.
- AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
Posted 1 month ago
15.0 - 20.0 years
5 - 9 Lacs
Mumbai
Work from Office
Location: Mumbai

Role Overview:
As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.

Key Responsibilities:
- Build scalable batch and real-time ETL pipelines using Spark and Hive
- Integrate structured and unstructured data sources
- Perform performance tuning and code optimization
- Support orchestration and job scheduling (NiFi, Airflow)

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience: 3–15 years
- Proficiency in PySpark/Scala with Hive/Impala
- Experience with data partitioning, bucketing, and optimization
- Familiarity with Kafka, Iceberg, NiFi is a must
- Knowledge of banking or financial datasets is a plus
Posted 1 month ago
1.0 - 3.0 years
3 - 7 Lacs
Chennai
Hybrid
- Strong experience in Python
- Good experience in Databricks
- Experience working on the AWS/Azure cloud platforms
- Experience working with REST APIs and services, and messaging and event technologies
- Experience with ETL or data pipeline build tools
- Experience with streaming platforms such as Kafka
- Demonstrated experience working with large and complex data sets
- Ability to document data pipeline architecture and design
- Experience in Airflow is nice to have
- Ability to build complex Delta Lakes (a Delta MERGE sketch follows below)
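For the Databricks/Delta Lake item above, a minimal upsert sketch with Delta's MERGE API might look like this; it assumes the delta-spark package (bundled on Databricks), and all paths and keys are placeholders.

```python
# Sketch: upsert a batch of changes into a Delta table with MERGE.
# Requires delta-spark (bundled on Databricks); names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

updates = spark.read.json("s3://bucket/cdc/customers/latest/")
target = DeltaTable.forPath(spark, "s3://bucket/delta/customers/")

(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()      # apply changed attributes
       .whenNotMatchedInsertAll()   # insert brand-new customers
       .execute())
```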
Posted 1 month ago
1.0 - 3.0 years
2 - 5 Lacs
Chennai
Work from Office
Mandatory Skills: AWS, Python, SQL, Spark, Airflow (see the DAG sketch below), Snowflake

Responsibilities:
- Create and manage cloud resources in AWS.
- Ingest data from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data based on various proprietary systems.
- Implement data ingestion and processing with the help of Big Data technologies.
- Process/transform data using various technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform.
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations.
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible.
- Identify and interpret trends and patterns from complex data sets.
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Be a key participant in regular Scrum ceremonies with the agile teams.
- Be proficient at developing queries, writing reports and presenting findings.
- Mentor junior members and bring best industry practices.
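A minimal Airflow DAG sketching the ingest, quality-check, and load flow described above; the task bodies are stubs, and the DAG id and schedule are illustrative.

```python
# Minimal Airflow DAG sketch: ingest -> quality check -> load.
# Task bodies are stubs; dag_id and schedule are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull from source systems into S3 staging")   # stub

def quality_check():
    print("verify row counts / nulls before loading")   # stub

def load():
    print("load validated data into Snowflake")         # stub

with DAG(
    dag_id="daily_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="quality_check", python_callable=quality_check)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # linear dependency chain
```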
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Gurugram
Work from Office
Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements:
Primary skills: Technology -> Cloud Platform -> AWS App Development -> Amazon Redshift
Preferred Skills: Technology -> Cloud Platform -> AWS App Development -> Amazon Redshift

Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit

* Location of posting is subject to business requirements
Posted 1 month ago
5.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
- Design, develop, and implement machine learning models and statistical algorithms
- Analyze large datasets to extract meaningful insights and trends
- Collaborate with stakeholders to define business problems and deliver data-driven solutions
- Optimize and scale machine learning models for production environments
- Present analytical findings and recommendations in a clear, actionable manner

Key Skills:
- Proficiency in Python, R, and SQL
- Experience with ML libraries like TensorFlow, PyTorch, or Scikit-learn (a scikit-learn sketch follows below)
- Strong knowledge of statistical methods and data visualization tools
- Excellent problem-solving and storytelling skills
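As an illustration of the model-development loop described above, here is a self-contained scikit-learn sketch on synthetic data; the estimator and metric choices are examples, not a prescribed stack.

```python
# Sketch of a model-development loop: train a classifier on synthetic
# data and report holdout metrics. Estimator choice is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Present findings as an actionable summary, not just a raw score.
print(classification_report(y_test, model.predict(X_test)))
```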
Posted 1 month ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
- PySpark, Python, SQL: strong focus on big data processing, which is core to data engineering.
- AWS Cloud Services (Lambda, Glue, S3, IAM): indicates working with cloud-based data pipelines.
- Airflow, GitHub: essential for orchestration and version control in data workflows.
Posted 1 month ago
3.0 - 8.0 years
12 - 16 Lacs
Mangaluru, Hyderabad, Bengaluru
Work from Office
We're looking for a Senior Backend Developer who thrives at the intersection of software engineering and data engineering. This role involves architecting and optimizing complex, high-throughput backend systems that power data-driven products at scale. If you have deep backend chops, strong database expertise across RDBMS platforms, and hands-on experience with large-scale data workflows, we'd love to hear from you.

Key Responsibilities
1. Leadership & Project Delivery
- Lead backend development teams, ensuring adherence to Agile practices and development best practices.
- Collaborate across product, frontend, DevOps, and data teams to design, build, and deploy robust features and services.
- Drive code quality through reviews, mentoring, and enforcing design principles.
2. Research & Innovation
- Conduct feasibility studies on emerging technologies, frameworks, and methodologies.
- Design and propose innovative solutions for complex technical challenges using data-centric approaches.
- Continuously improve system design with a forward-thinking mindset.
3. System Architecture & Optimization
- Design scalable, distributed, and secure system architectures.
- Optimize and refactor legacy systems to improve performance, maintainability, and scalability.
- Define best practices around observability, logging, and resiliency.
4. Database & Data Engineering
- Design, implement, and optimize relational databases (PostgreSQL, MySQL, SQL Server, etc.).
- Develop efficient SQL queries, stored procedures, indexes, and schema migrations.
- Collaborate with data engineering teams on ETL/ELT pipelines, data ingestion, transformation, and warehousing.
- Work with large datasets, batch processing, and streaming data (e.g., Kafka, Spark, Airflow).
- Ensure data integrity, consistency, and security across backend and analytics pipelines.

Must-Have Skills
- Backend Development: TypeScript, Node.js (or an equivalent backend framework), REST/GraphQL API design.
- Databases & Storage: Strong proficiency in PostgreSQL, plus experience with other RDBMS like MySQL, SQL Server, or Oracle. Familiarity with NoSQL (e.g., Redis, MongoDB) and columnar/OLAP stores (e.g., ClickHouse, Redshift).
- Awareness of Data Engineering: Hands-on work with data ingestion, transformation, pipelines, and data orchestration tools. Exposure to tools like Apache Airflow, Kafka, Spark, or dbt.
- Cloud Infrastructure: Proficiency with AWS (Lambda, EC2, RDS, S3, IAM, CloudWatch).
- DevOps & CI/CD: Experience with Docker, Kubernetes, GitHub Actions or similar CI/CD pipelines.
- Architecture: Experience designing secure, scalable, and fault-tolerant backend systems.
- Agile & SDLC: Strong understanding of Agile workflows, SDLC best practices, and version control (Git).

Nice-to-Have Skills
- Experience with event-driven architectures or microservices.
- Exposure to data warehouse environments (e.g., Snowflake, BigQuery).
- Knowledge of backend-for-frontend collaboration (especially with React.js).
- Familiarity with data cataloging, data governance, and lineage tools.

Preferred Qualifications
- Bachelor's or Master's in Computer Science, Software Engineering, or a related technical field.
- Proven experience leading backend/data projects in enterprise or startup environments.
- Strong system design, analytical, and problem-solving skills.
- Awareness of cybersecurity best practices in cloud and backend development.
Posted 1 month ago
5.0 - 7.0 years
5 - 9 Lacs
Chennai
Work from Office
Design, develop, and maintain scalable data pipelines and systems to support the collection, integration, and analysis of healthcare and enterprise data. The primary responsibilities of this role include designing and implementing efficient data pipelines, architecting robust data models, and adhering to data management best practices. In this position, you will play a crucial part in transforming raw data into meaningful insights, through development of semantic data layers, enabling data-driven decision-making across the organization. The ideal candidate will possess strong technical skills, a keen understanding of data architecture, and a passion for optimizing data processes.

What you will do
- Design and implement scalable and efficient data pipelines to acquire, transform, and integrate data from various sources, such as electronic health records (EHR), medical devices, claims data, and back-office enterprise data
- Develop data ingestion processes, including data extraction, cleansing, and validation, ensuring data quality and integrity throughout the pipeline (a validation sketch follows below)
- Collaborate with cross-functional teams, including subject matter experts, analysts, and engineers, to define data requirements and ensure data pipelines meet the needs of data-driven initiatives
- Design and implement data integration strategies to merge disparate datasets, enabling comprehensive and holistic analysis
- Implement data governance practices and ensure compliance with healthcare data standards, regulations (e.g., HIPAA), and security protocols
- Monitor and troubleshoot pipeline and data model performance, identifying and addressing bottlenecks, and ensuring optimal system performance and data availability
- Design and implement data models that align with domain requirements, ensuring efficient data storage, retrieval, and delivery
- Apply data modeling best practices and standards to ensure consistency, scalability, and reusability of data models
- Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of healthcare data
- Develop and enforce data governance policies and procedures, including data lineage, architecture, and metadata management
- Collaborate with stakeholders to define data quality metrics and establish data quality improvement initiatives
- Document data engineering processes, methodologies, and data flows for knowledge sharing and future reference
- Stay up to date with emerging technologies, industry trends, and healthcare data standards to drive innovation and ensure compliance

Who you are
- 4+ years of strong programming skills in object-oriented languages such as Python
- Proficiency in SQL
- Hands-on experience with data integration tools, ETL/ELT frameworks, and data warehousing concepts
- Hands-on experience with data modeling and schema design, including concepts such as star schema, snowflake schema and data normalization
- Familiarity with healthcare data standards (e.g., HL7, FHIR), electronic health records (EHR), medical coding systems (e.g., ICD-10, SNOMED CT), and relevant healthcare regulations (e.g., HIPAA)
- Hands-on experience with big data processing frameworks such as Apache Hadoop, Apache Spark, etc.
- Working knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and related services (e.g., DMS, S3, Redshift, BigQuery)
- Experience integrating heterogeneous data sources, aligning data models and mapping between different data schemas
- Understanding of metadata management principles and tools for capturing, storing, and managing metadata associated with data models and semantic data layers
- Ability to track the flow of data and its transformations across data models, ensuring transparency and traceability
- Understanding of data governance principles, data quality management, and data security best practices
- Strong problem-solving and analytical skills with the ability to work with complex datasets and data integration challenges
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams

Education
Bachelor's or Master's degree in computer science, information systems, or a related field. Proven experience as a Data Engineer or similar role with a focus on healthcare data.

Soft Skills:
- Attention to detail
- Proficient in English communication, both written and verbal
- Dedicated self-starter with excellent people skills
- Quick learner and a go-getter
- Effective time and project management
- Analytical thinker and a great team player
- Strong leadership, interpersonal & problem-solving skills
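A hedged pandas sketch of the data-quality checks the posting describes; the claims-feed column names, path, and checks are assumptions for illustration, not a real schema.

```python
# Sketch of data-quality validation for a staged healthcare extract.
# Column names, path, and checks are illustrative assumptions.
import pandas as pd

REQUIRED = ["claim_id", "member_id", "icd10_code", "service_date"]

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return a dict of check -> pass/fail for accuracy and completeness."""
    return {
        "no_missing_required": df[REQUIRED].notna().all().all(),
        "unique_claim_ids": df["claim_id"].is_unique,
        "valid_dates": pd.to_datetime(
            df["service_date"], errors="coerce"
        ).notna().all(),
    }

df = pd.read_parquet("claims_batch.parquet")   # staged extract, path assumed
for check, passed in run_quality_checks(df).items():
    print(f"{check}: {'PASS' if passed else 'FAIL'}")
```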
Posted 1 month ago
4.0 - 8.0 years
9 - 12 Lacs
Chennai
Work from Office
Job Title: Data Engineer
Location: Chennai (Hybrid)

Summary
Design, develop, and maintain scalable data pipelines and systems to support the collection, integration, and analysis of healthcare and enterprise data. The primary responsibilities of this role include designing and implementing efficient data pipelines, architecting robust data models, and adhering to data management best practices. In this position, you will play a crucial part in transforming raw data into meaningful insights, through development of semantic data layers, enabling data-driven decision-making across the organization. The ideal candidate will possess strong technical skills, a keen understanding of data architecture, and a passion for optimizing data processes.

Accountability
- Design and implement scalable and efficient data pipelines to acquire, transform, and integrate data from various sources, such as electronic health records (EHR), medical devices, claims data, and back-office enterprise data
- Develop data ingestion processes, including data extraction, cleansing, and validation, ensuring data quality and integrity throughout the pipeline
- Collaborate with cross-functional teams, including subject matter experts, analysts, and engineers, to define data requirements and ensure data pipelines meet the needs of data-driven initiatives
- Design and implement data integration strategies to merge disparate datasets, enabling comprehensive and holistic analysis
- Implement data governance practices and ensure compliance with healthcare data standards, regulations (e.g., HIPAA), and security protocols
- Monitor and troubleshoot pipeline and data model performance, identifying and addressing bottlenecks, and ensuring optimal system performance and data availability
- Design and implement data models that align with domain requirements, ensuring efficient data storage, retrieval, and delivery
- Apply data modeling best practices and standards to ensure consistency, scalability, and reusability of data models
- Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of healthcare data
- Develop and enforce data governance policies and procedures, including data lineage, architecture, and metadata management
- Collaborate with stakeholders to define data quality metrics and establish data quality improvement initiatives
- Document data engineering processes, methodologies, and data flows for knowledge sharing and future reference
- Stay up to date with emerging technologies, industry trends, and healthcare data standards to drive innovation and ensure compliance

Skills
- 4+ years of strong programming skills in object-oriented languages such as Python
- Proficiency in SQL
- Hands-on experience with data integration tools, ETL/ELT frameworks, and data warehousing concepts
- Hands-on experience with data modeling and schema design, including concepts such as star schema, snowflake schema and data normalization
- Familiarity with healthcare data standards (e.g., HL7, FHIR), electronic health records (EHR), medical coding systems (e.g., ICD-10, SNOMED CT), and relevant healthcare regulations (e.g., HIPAA)
- Hands-on experience with big data processing frameworks such as Apache Hadoop, Apache Spark, etc.
- Working knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and related services (e.g., DMS, S3, Redshift, BigQuery)
- Experience integrating heterogeneous data sources, aligning data models and mapping between different data schemas
- Understanding of metadata management principles and tools for capturing, storing, and managing metadata associated with data models and semantic data layers
- Ability to track the flow of data and its transformations across data models, ensuring transparency and traceability
- Understanding of data governance principles, data quality management, and data security best practices
- Strong problem-solving and analytical skills with the ability to work with complex datasets and data integration challenges
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams

Education
Bachelor's or Master's degree in computer science, information systems, or a related field. Proven experience as a Data Engineer or similar role with a focus on healthcare data.
Posted 1 month ago
8.0 - 12.0 years
13 - 17 Lacs
Gurugram
Work from Office
As a Technical Lead, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle from solution through execution and launch, and building the right team in close collaboration with business and product teams.

Primary Responsibilities:
- Work collaboratively with a core team of architects, business teams, and developers spread across different locations to shape solutions to requirements from a technical perspective.
- Coordinate closely with the design and analysis teams to fill gaps in the Product Requirement Documents (PRDs), identify the edge use cases, and build a foolproof solution.
- Lead and mentor a team of technical engineers, providing guidance and support in the execution of product development projects.
- Proficient with practices like Test-Driven Development, Continuous Integration, Continuous Delivery, and infrastructure automation.
- Experience with the JS stack: ReactJS, NodeJS.
- Experience with database engines (MySQL and PostgreSQL), with proven knowledge of database migrations and high-throughput, low-latency use cases.
- Experience with key-value stores like Redis, MongoDB and similar.
- Preferred: knowledge of distributed massive data processing technologies like Spark, Trino or similar, with proven experience in event-driven data pipelines.
- Experience with data warehouses like BigQuery, Redshift, or similar.
- Experience with testing and code coverage tools like Jest, JUnit, SonarQube, or similar.
- Experience with CI/CD tools like Jenkins, AWS CodePipeline, or similar.
- Experience improving the security posture of the product.
- Research and build POCs using available frameworks to ensure feasibility.
- Create technical design documents and present the architectural details to a larger audience.
- Participate in architecture and design reviews for projects that require complex technical solutions.
- Experience with microservices architecture and exposure to the GCP/AWS cloud services platforms.
- Develop reusable frameworks/components and POCs to accelerate the development of projects.
- Discover third-party APIs/accounts for integration purposes (in a cost-effective manner).
- Responsible for making the overall product architecture scalable from a future perspective.
- Establish and maintain engineering processes, standards, and best practices to ensure consistency and quality across projects.
- Coordinate with cross-functional teams to resolve technical challenges, mitigate risks, and ensure timely delivery of products.
- Stay updated with the latest trends and advancements in related technologies.

Functional Responsibilities:
- Facilitate daily stand-up meetings, sprint planning, sprint review, and retrospective meetings.
- Work closely with the product owner to prioritize the product backlog and ensure that user stories are well-defined and ready for development.
- Identify and address issues or conflicts that may impact project delivery or team morale.
- Lead the risk management and incident management activities.
- Conduct regular performance and code reviews, and provide feedback to development team members to improve professional development.
- Experience with Agile project management tools such as Jira and Trello.

Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience with a proven track record of successfully leading cross-functional engineering teams in the development and delivery of complex products.
- Strong facilitation and coaching skills, with the ability to guide teams through Agile ceremonies and practices.
- Excellent communication and interpersonal skills, with the ability to build rapport and trust with team members and stakeholders.
- Proven ability to identify and resolve impediments or conflicts that may arise during the development process.
- Ability to thrive in a fast-paced, dynamic environment and adapt quickly to changing priorities.
- Continuous-growth and learner mindset with a passion for Agile principles and practices, and a commitment to ongoing professional development.
- Experience with Agile methodologies, having participated in sprints and scrums.
- Ability to take ownership of complex tasks and deliver while mentoring team members.
Posted 1 month ago
3.0 - 6.0 years
10 - 15 Lacs
Gurugram, Bengaluru
Work from Office
- 3+ years of experience in data science roles, working with tabular data in large-scale projects.
- Experience in feature engineering and working with methods such as XGBoost, LightGBM, factorization machines, and similar algorithms (an XGBoost sketch follows below).
- Experience in the adtech or fintech industries is a plus.
- Familiarity with clickstream data, predictive modeling for user engagement, or bidding optimization is highly advantageous.
- MS or PhD in mathematics, computer science, physics, statistics, electrical engineering, or a related field.
- Proficiency in Python (3.9+), with experience in scientific computing and machine learning tools (e.g., NumPy, Pandas, SciPy, scikit-learn, matplotlib, etc.).
- Familiarity with deep learning frameworks (such as TensorFlow or PyTorch) is a plus.
- Strong expertise in applied statistical methods, A/B testing frameworks, advanced experiment design, and interpreting complex experimental results.
- Experience querying and processing data using SQL and working with distributed data storage solutions (e.g., AWS Redshift, Snowflake, BigQuery, Athena, Presto, MinIO, etc.).
- Experience in budget allocation optimization, lookalike modeling, LTV prediction, or churn analysis is a plus.
- Ability to manage multiple projects, prioritize tasks effectively, and maintain a structured approach to complex problem-solving.
- Excellent communication and collaboration skills to work effectively with both technical and business teams.
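An illustrative XGBoost workflow for the tabular-modeling skills above, trained on synthetic data so it runs self-contained; the hyperparameters and the class-imbalance ratio are arbitrary examples, and in practice features would come from clickstream tables.

```python
# Illustrative XGBoost workflow on synthetic, imbalanced tabular data,
# evaluated on a holdout split. Hyperparameters are arbitrary examples.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20000, n_features=30,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(
    n_estimators=400,
    learning_rate=0.05,
    max_depth=6,
    scale_pos_weight=9.0,   # offset the 9:1 class imbalance
    eval_metric="auc",
)
model.fit(X_tr, y_tr)

print("holdout AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```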
Posted 1 month ago
1.0 - 4.0 years
6 - 10 Lacs
Mumbai, Mumbai Suburban
Work from Office
Own, execute and drive the CRM campaigns (Push Notifications, Email, SMS, In-app & Browser Notifications, WhatsApp) to drive channel revenue and visits.

KEY DELIVERABLES:
- Creation, testing and delivery of campaigns for Push and Browser Notifications, Email, SMS and other owned-media channels.
- CRM channel planning for Push Notifications, Email, SMS, Browser Notifications.
- Identifying and driving improvement projects for CTR and campaign efficiency.
- Coordination with the creative team to get copy and creatives done as per schedule.
- Create automated campaigns by building workflows and data rules, creating the data on Redshift, and creating schemas and workflows on the Campaign Management Platform.
- Build workflows to create and maintain reports for campaign performance.

DESIRABLE SKILLS:
Essential Attributes
- Teamwork, communication and interpersonal skills, analytical skills, dependability and a strong work ethic, adaptability and flexibility.
- Data handling on Excel and preferably on Redshift.
- Experience in category/marketing planning and execution.
Desired Attributes
- Understanding of Email Marketing, Push Notifications and other CRM channels.
- Basic understanding of segmentation & marketing.
Posted 1 month ago