6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Gracenote is the content business unit of Nielsen that powers the world of media entertainment. Our metadata solutions help media and entertainment companies around the world deliver personalized content search and discovery, connecting audiences with the content they love. We’re at the intersection of people and media entertainment. With our cutting-edge technology and solutions, we help audiences easily find TV shows, movies, music and sports across multiple platforms. As the world leader in entertainment data and services, we power the world’s top streaming platforms, cable and satellite TV providers, media companies, consumer electronics manufacturers, music services and automakers to navigate and succeed in the competitive streaming world. Our metadata entertainment solutions have a global footprint of 80+ countries, 100K+ channels and catalogs, 70+ sports and 100M+ music tracks, across 35 languages.

Job Purpose
As a senior DBA, you will own the databases in our data pipeline and the data governance of our Data Strategy. Our Data Strategy underpins our suite of client-facing applications, data science activities, operational tools and business analytics.

Responsibilities
• Architect and build scalable, resilient and cost-effective data storage solutions to support complex data pipelines. The architecture has two facets, storage and compute; the DBA designs and maintains the different tiers of data storage, including (but not limited to) archival, long-term persistent, transactional and reporting storage.
• Design, implement and maintain various data pipelines such as self-service ingestion tools, exports to application-specific warehouses, and indexing activities.
• Own data modeling, and design, implement and maintain various data catalogs to support data transformation and product requirements.
• Configure and deploy databases on the AWS cloud, ensuring optimal performance and scalability.
• Monitor database activities for compliance and security purposes.
• Set up and manage backup and recovery strategies for cloud databases, ensuring availability and quality.
• Monitor database performance metrics and identify areas for optimization.
• Create scripts for database configuration and provisioning.
• Collaborate with Data Science to understand, translate, and integrate methodologies into engineering build pipelines.
• Partner with product owners to translate complex business requirements into technical solutions, imparting design and architecture guidance.
• Provide expert mentorship to project teams on technology strategy, cultivating advanced skill sets in software engineering and the modern SDLC.
• Stay informed about the latest technologies and methodologies by participating in industry forums, maintaining an active peer network, and engaging actively with customers.
• Cultivate a team environment focused on continuous learning, where innovative technologies are developed and refined through teamwork.

Must have skills:
• Experience with languages such as ANSI SQL, T-SQL, PL/pgSQL and PL/SQL, plus database design, normalization, server tuning, and query plan optimization (see the plan-inspection sketch after this posting).
• 6+ years of professional DBA experience with large datastores, including HA and DR planning and support.
• Software engineering experience with programming languages such as Java, Scala, and Python.
• Demonstrated understanding of and experience with big data tools such as Kafka, Spark and Trino/Presto.
• Experience with orchestration tools such as Airflow.
• Comfortable using Docker and Kubernetes for container management.
• DevOps experience deploying and tuning the applications you’ve built.
• Monitoring tools such as Datadog, Prometheus, Grafana and CloudWatch.

Good to have:
• Software engineering experience with the Unix shell.
• Understanding of file systems.
• Experience configuring database replication (physical and/or logical).
• ETL experience (third-party and proprietary).
• A personal technical blog.
• A personal (Git) repository of side projects.
• Participation in an open-source community.

Qualifications
• B.E / B.Tech / BCA / MCA in Computer Science, Engineering or a related subject.
• Strong computer science fundamentals.
• Comfortable with version control systems such as git.
• A thirst for learning new tech and keeping up with industry advances.
• Excellent communication and knowledge-sharing skills.
• Comfortable working with technical and non-technical teams.
• Strong debugging skills.
• Comfortable providing and receiving code review feedback.
• A positive attitude, adaptability, enthusiasm, and a growth mindset.

About Nielsen: By connecting clients to audiences, we fuel the media industry with the most accurate understanding of what people listen to and watch. To discover what audiences love, we measure across all channels and platforms - from podcasts to streaming TV to social media. And when companies and advertisers are truly connected to their audiences, they can see the most important opportunities and accelerate growth. Do you want to move the industry forward with Nielsen? Our people are the driving force. Your thoughts, ideas, and expertise can propel us forward. Whether you have fresh thinking around maximizing a new technology or you see a gap in the market, we are here to listen and act. Our team is made strong by a diversity of thoughts, experiences, skills, and backgrounds. You’ll enjoy working with smart, fun, curious colleagues who are passionate about their work. Come be part of a team that motivates you to do your best work!
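For a flavor of the query-plan work this role describes, here is a minimal hedged sketch of inspecting a PostgreSQL execution plan from Python with psycopg2. The connection settings, table, and query are illustrative placeholders, not anything specific to Gracenote's stack.

```python
# Sketch: pull a query's execution plan from PostgreSQL and flag
# sequential scans, the kind of check a DBA might script before tuning.
# Connection settings and the table/query are illustrative placeholders.
import json
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb", user="dba")
query = "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"

with conn.cursor() as cur:
    # EXPLAIN (FORMAT JSON) returns the plan without executing the query.
    cur.execute(f"EXPLAIN (FORMAT JSON) {query}")
    plan_json = cur.fetchone()[0]
    if isinstance(plan_json, str):  # some drivers return the JSON as raw text
        plan_json = json.loads(plan_json)
    plan = plan_json[0]["Plan"]

def walk(node, depth=0):
    # Recursively print each plan node with its estimated total cost.
    print("  " * depth + f"{node['Node Type']} (cost={node['Total Cost']})")
    if node["Node Type"] == "Seq Scan":
        print("  " * depth + "  ^ sequential scan: candidate for an index")
    for child in node.get("Plans", []):
        walk(child, depth + 1)

walk(plan)
conn.close()
```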
Posted 5 days ago
2.0 years
0 Lacs
Dholera, Gujarat, India
On-site
About The Business
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India’s first AI-enabled state-of-the-art semiconductor foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long term stakeholder value creation based on leadership with Trust.’

Job Responsibilities
• Architect and implement scalable offline data pipelines for manufacturing systems including AMHS, MES, SCADA, PLCs, vision systems, and sensor data.
• Design and optimize ETL/ELT workflows using Python, Spark, SQL, and orchestration tools (e.g., Airflow) to transform raw data into actionable insights.
• Lead database design and performance tuning across SQL and NoSQL systems, optimizing schema design, queries, and indexing strategies for manufacturing data.
• Enforce robust data governance by implementing data quality checks, lineage tracking, access controls, security measures, and retention policies.
• Optimize storage and processing efficiency through strategic use of formats (Parquet, ORC), compression, partitioning, and indexing for high-performance analytics (see the sketch after this posting).
• Implement streaming data solutions (using Kafka/RabbitMQ) to handle real-time data flows and ensure synchronization across control systems.
• Build dashboards using analytics tools like Grafana.
• Develop standardized data models and APIs to ensure consistency across manufacturing systems and enable data consumption by downstream applications.
• Collaborate cross-functionally with Platform Engineers, Data Scientists, Automation teams, IT Operations, Manufacturing, and Quality departments.
• Mentor junior engineers while establishing best practices and documentation standards, and foster a data-driven culture throughout the organization.

Essential Attributes
• Expertise in Python programming for building robust ETL/ELT pipelines and automating data workflows.
• Good understanding of and proficiency with the Hadoop ecosystem.
• Hands-on experience with Apache Spark (PySpark) for distributed data processing and large-scale transformations.
• Strong proficiency in SQL for data extraction, transformation, and performance tuning across structured datasets.
• Proficient in using Apache Airflow to orchestrate and monitor complex data workflows reliably.
• Skilled in real-time data streaming using Kafka or RabbitMQ to handle data from manufacturing control systems.
• Experience with both SQL and NoSQL databases, including PostgreSQL, TimescaleDB, and MongoDB, for managing diverse data types.
• In-depth knowledge of data lake architectures and efficient file formats like Parquet and ORC for high-performance analytics.
• Proficient in containerization and CI/CD practices using Docker and Jenkins or GitHub Actions for production-grade deployments.
• Strong understanding of data governance principles, including data quality, lineage tracking, and access control.
• Ability to design and expose RESTful APIs using FastAPI or Flask to enable standardized and scalable data consumption.

Qualifications
• BE/ME degree in Computer Science, Electronics, or Electrical Engineering.

Desired Experience Level
• Master’s + 2 years of relevant experience, or Bachelor’s + 4 years of relevant experience.
• Experience with the semiconductor industry is a plus.
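As a small illustration of the storage-optimization bullet above, here is a hedged PySpark sketch that compacts raw sensor readings into a partitioned, compressed Parquet layout. The paths, column names, and partition keys are invented for the example.

```python
# Sketch: land raw equipment sensor data as partitioned, snappy-compressed
# Parquet so downstream analytics can prune by date and tool.
# All paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-compaction").getOrCreate()

raw = spark.read.json("/datalake/raw/sensors/")  # one JSON record per reading

compacted = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .repartition("event_date", "tool_id")  # fewer, larger files per partition
)

(compacted.write
    .mode("overwrite")
    .partitionBy("event_date", "tool_id")   # enables directory-level pruning
    .option("compression", "snappy")
    .parquet("/datalake/curated/sensors/"))
```

Partitioning by date and tool keeps queries like "last week on tool X" from scanning the whole lake, which is the point of the formats-and-partitioning bullet.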
Posted 5 days ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Technical Skills:
• 8+ years of hands-on experience in SQL development, query optimization, and performance tuning.
• Expertise in ETL tools (SSIS, Azure ADF, Databricks, Snowflake or similar) and relational databases (SQL Server, PostgreSQL, MySQL, Oracle).
• Strong understanding of data warehousing concepts, data modeling, indexing strategies, and query execution plans.
• Proficiency in writing efficient stored procedures, views, triggers, and functions for large datasets.
• Experience working with structured and semi-structured data (CSV, JSON, XML, Parquet).
• Hands-on experience in data validation, cleansing, and reconciliation to maintain high data quality (a reconciliation sketch follows this posting).
• Exposure to real-time and batch data processing techniques.

Nice-to-have:
• Experience with Azure or other data engineering stacks (ADF, Azure SQL, Synapse, Databricks, Snowflake), Python, Spark, NoSQL databases, and reporting tools like Power BI or Tableau.
• Strong problem-solving skills and the ability to troubleshoot ETL failures and performance issues.
• Ability to collaborate with business and analytics teams to understand and implement data requirements.
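The data-validation bullet above is the kind of check that is easy to script. Below is a minimal, assumption-laden sketch that reconciles row counts, keys, and column checksums between a source extract and a loaded target using pandas; the file names and key column are placeholders.

```python
# Sketch: reconcile a source extract against the loaded target table dump.
# Compares row counts, duplicate keys, and per-column checksums.
# File names and the key column are illustrative placeholders.
import pandas as pd

source = pd.read_csv("source_extract.csv")
target = pd.read_csv("target_table_dump.csv")

report = {
    "row_count_match": len(source) == len(target),
    "duplicate_keys_in_target": int(target["order_id"].duplicated().sum()),
}

# Checksum numeric columns: sums should agree if the load was lossless.
for col in source.select_dtypes("number").columns:
    if col in target.columns:
        report[f"{col}_sum_delta"] = float(source[col].sum() - target[col].sum())

# Keys present in source but missing from target point to dropped rows.
missing = set(source["order_id"]) - set(target["order_id"])
report["missing_keys"] = len(missing)

print(report)
```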
Posted 5 days ago
4.0 years
0 Lacs
Dholera, Gujarat, India
On-site
About The Business
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India’s first AI-enabled state-of-the-art semiconductor foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long term stakeholder value creation based on leadership with Trust.’

Job Responsibilities
• Architect and implement a scalable, offline Data Lake for structured, semi-structured, and unstructured data in an on-premises, air-gapped environment.
• Collaborate with Data Engineers, Factory IT, and Edge Device teams to enable seamless data ingestion and retrieval across the platform.
• Integrate with upstream systems like MES, SCADA, and process tools to capture high-frequency manufacturing data efficiently.
• Monitor and maintain system health, including compute resources, storage arrays, disk I/O, memory usage, and network throughput.
• Optimize Data Lake performance via partitioning, deduplication, compression (Parquet/ORC), and effective indexing strategies.
• Select, integrate, and maintain tools like Apache Hadoop, Spark, Hive, HBase, and custom ETL pipelines suitable for offline deployment.
• Build custom ETL workflows for bulk and incremental data ingestion using Python, Spark, and shell scripting (see the ingestion sketch after this posting).
• Implement data governance policies covering access control, retention periods, and archival procedures with security and compliance in mind.
• Establish and test backup, failover, and disaster recovery protocols specifically designed for offline environments.
• Document architecture designs, optimization routines, job schedules, and standard operating procedures (SOPs) for platform maintenance.
• Conduct root cause analysis for hardware failures, system outages, or data integrity issues.
• Drive system scalability planning for future multi-fab or multi-site expansions.

Essential Attributes (Tech Stacks)
• Hands-on experience designing and maintaining offline or air-gapped Data Lake environments.
• Deep understanding of Hadoop ecosystem tools: HDFS, Hive, MapReduce, HBase, YARN, ZooKeeper and Spark.
• Expertise in custom ETL design and large-scale batch and stream data ingestion.
• Strong scripting and automation capabilities using Bash and Python.
• Familiarity with data compression formats (ORC, Parquet) and ingestion frameworks (e.g., Flume).
• Working knowledge of message queues such as Kafka or RabbitMQ, with a focus on integration logic.
• Proven experience in system performance tuning, storage efficiency, and resource optimization.

Qualifications
• BE/ME in Computer Science, Machine Learning, Electronics Engineering, Applied Mathematics, or Statistics.

Desired Experience Level
• 4 years of relevant experience post Bachelor’s, or 2 years post Master’s.
• Experience with the semiconductor industry is a plus.
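To make the incremental-ingestion bullet concrete, here is a minimal sketch using the kafka-python client to drain a sensor topic in batches and append records to date-stamped files, the sort of building block an offline ingestion workflow might use. The topic, brokers, and landing path are placeholders.

```python
# Sketch: batch-drain a Kafka topic of machine events into date-stamped
# JSON-lines files in the landing zone, for later Spark compaction.
# Topic, brokers, and paths are illustrative placeholders.
import json
from datetime import datetime, timezone
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "factory.sensor.events",
    bootstrap_servers=["broker1:9092"],
    group_id="landing-zone-ingest",
    enable_auto_commit=False,            # commit only after a durable write
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

batch = consumer.poll(timeout_ms=5000, max_records=10000)
records = [msg.value for msgs in batch.values() for msg in msgs]

if records:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    path = f"/landing/sensor_events/{stamp}.jsonl"
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
    consumer.commit()                    # offsets advance only after the file lands

consumer.close()
```

Disabling auto-commit and committing only after the file is written gives at-least-once delivery, which suits a landing zone where downstream compaction can deduplicate.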
Posted 5 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build data engineering solutions that process billions of records a day in a scalable fashion using AWS technologies? Do you want to create the next-generation tools for intuitive data access? If so, Amazon Finance Technology (FinTech) is for you!

FinTech is seeking a Data Engineer to join the team that is shaping the future of the finance data platform. The team is committed to building the next-generation big data platform that will be one of the world's largest finance data warehouses, supporting Amazon's rapidly growing and dynamic businesses, and using it to deliver BI applications that have an immediate influence on day-to-day decision making. Amazon has a culture of data-driven decision-making, and demands data that is timely, accurate, and actionable. Our platform serves Amazon's finance, tax and accounting functions across the globe.

As a Data Engineer, you should be an expert with data warehousing technical components (e.g. data modeling, ETL and reporting), infrastructure (e.g. hardware and software) and their integration. You should have a deep understanding of the architecture for enterprise-level data warehouse solutions using multiple platforms (RDBMS, columnar, cloud). You should be an expert in the design, creation, management, and business use of large datasets. You should have excellent business and communication skills, enabling you to work with business owners to develop and define key business questions, and to build data sets that answer those questions. The candidate is expected to build efficient, flexible, extensible, and scalable ETL and reporting solutions. You should be enthusiastic about learning new technologies and able to implement solutions using them to provide new functionality to users or to scale the existing platform. Excellent written and verbal communication skills are required, as the person will work very closely with diverse teams. Strong analytical skills are a plus. Above all, you should be passionate about working with huge data sets and someone who loves to bring datasets together to answer business questions and drive change.

Our ideal candidate thrives in a fast-paced environment, relishes working with large transactional volumes and big data, enjoys the challenge of highly complex business contexts (that are typically being defined in real time), and, above all, is passionate about data and analytics. In this role you will be part of a team of engineers building the world's largest financial data warehouses and BI tools for Amazon's expanding global footprint.

Key job responsibilities
• Design, implement, and support a platform providing secured access to large datasets.
• Interface with tax, finance and accounting customers, gathering requirements and delivering complete BI solutions.
• Model data and metadata to support ad-hoc and pre-built reporting.
• Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions (an idempotent-load sketch follows this posting).
• Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
• Tune application and query performance using profiling tools and SQL.
• Analyze and solve problems at their root, stepping back to understand the broader context.
• Learn and understand a broad range of Amazon’s data resources and know when, how, and which to use and which not to use.
• Keep up to date with advances in big data technologies and run pilots to design a data architecture that scales with increased data volume using AWS.
• Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for datasets.
• Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment.

Basic Qualifications
• Experience with SQL
• 1+ years of data engineering experience
• Experience with data modeling, warehousing and building ETL pipelines
• Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
• Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
• Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
• Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, or DataStage

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2968106
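Since the posting emphasizes maintainable metrics pipelines, here is a small hedged sketch of an idempotent daily load: delete the target day's partition, then re-insert, so reruns cannot double-count. The table and column names are invented, and the connection helper is a stand-in for whatever driver the warehouse uses.

```python
# Sketch: idempotent daily metrics load (delete-then-insert by partition),
# so a rerun of the same day replaces rather than duplicates rows.
# get_warehouse_connection(), table and column names are hypothetical.
from datetime import date

def load_daily_revenue(conn, run_date: date) -> None:
    delete_sql = "DELETE FROM finance.daily_revenue WHERE report_date = %s"
    insert_sql = """
        INSERT INTO finance.daily_revenue (report_date, region, revenue)
        SELECT %s, region, SUM(amount)
        FROM finance.transactions
        WHERE transaction_date = %s
        GROUP BY region
    """
    with conn.cursor() as cur:
        # Both statements share one transaction: a failure rolls back cleanly.
        cur.execute(delete_sql, (run_date,))
        cur.execute(insert_sql, (run_date, run_date))
    conn.commit()

# Usage (connection factory is a placeholder for the real driver):
# conn = get_warehouse_connection()
# load_daily_revenue(conn, date(2024, 1, 15))
```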
Posted 5 days ago
1.0 - 3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
We are seeking exceptional candidates with the experience and passion to fill an Analyst position in the Survey Operations & Analytics (SOA) team at BCG. This team is part of the Center for Customer Insights, rolling up to the Global Advantage Practice Area, and is an integral part of BCG’s strategy to deliver superior value and sustained impact to clients. SOA specializes in supporting BCG case teams on client projects that include primary data collection (surveys). The team has capabilities that enable it to engage across all elements of the primary research value chain, with both BCG case teams and external service providers. Special emphasis is placed on the application of advanced analytics to survey data, providing key outputs that drive critical insights. Additionally, the team builds models, simulations, and visualizations to maximize the usability and impact of these analytics outputs. At SOA, you will be joining a highly innovative team with an entrepreneurial mindset. You will work directly with BCG’s core consulting business in a highly dynamic and fast-paced environment. In addition to bringing your own unique skills and capabilities to the table, you will be expected to leverage opportunities to learn and grow intellectually through formal and on-the-job training.

What You'll Bring
Education
• Bachelor’s/Master’s degree with demonstrated high academic achievement in analytics, data science, or mathematics, and relevant work experience in market/consumer research data analytics (projects/coursework/internships).
• Candidates with the following educational backgrounds will be preferred: Statistics/Applied Statistics, Operational Research, Economics, or Mathematics.

Experience
• 1-3 years of relevant experience in the field of market research and data analytics.
• Strong analytical capabilities: data management, processing, and analysis.
• Strong hands-on experience with advanced Excel and PowerPoint; knowledge of additional tools such as SPSS, R, Python, Alteryx, Tableau, SAS, MarketSight, VBA, or SQL will be an added advantage.
• Strong knowledge of and affinity for database and analytical tool management.
• Strong ability to work with multiple, geographically distributed teams in a fast-paced environment, to multi-task, and to operate effectively in a matrix organization through prioritization and expectation management.
• Able to engage with senior stakeholders independently, prioritize work, and manage stakeholder expectations.
• Strong interpersonal skills and credibility: a collaborative team player with a strong work ethic and a service-excellence orientation.
• Effective written and verbal communication (English).

Who You'll Work With
Colleagues in the Survey Operations & Analytics team who engage with BCG consultants and topic experts for efficient survey execution and analytics of survey data. Your work will support data-driven consumer insights, driving strategic decisions for our clients.

Additional info
YOU'RE GOOD AT
• Business oriented: understanding business objectives and the context of associated market research.
• Fast learner: able to grasp and apply market research knowledge to interpret and discuss elements of survey design (sampling, quotas, methodology, questionnaire structure, etc.).
• Team player: able to collaborate with survey programmers, third-party vendors, and partners for the implementation of online surveys and data collection.
• Eye for detail: able to engage on quality review of online surveys before launch, with data handling and management capabilities to validate and clean data prior to further processing (a weighted-tabulation sketch follows this posting).
• Sound knowledge of statistics and the application of statistical concepts (univariate, bivariate and multivariate methods); able to quickly learn and use specialized survey data analysis tools such as SPSS, Sawtooth, etc. to deliver practical data analytics outcomes.
• Strong data interpretation capabilities; able to learn and use Alteryx and advanced Excel for survey data transformation and processing, as well as for the creation of formula/macro-driven models and simulators.
• Knack for graphical representation of analytical outputs; able to learn and use visualization tools including PowerPoint, Tableau and MarketSight to represent analytics output in an appealing and insightful manner.
• Working with virtual, multicultural global teams, requiring cross-time-zone engagement.
• Working in a fast-paced and dynamic environment, dealing with ambiguity and unstructured situations.
• Multi-tasking, including networking and relationship building.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
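A small hedged illustration of the survey-tabulation work described above: a weighted crosstab in pandas, with invented column names and records standing in for real survey fields and post-stratification weights.

```python
# Sketch: weighted crosstab of purchase intent by age band for survey data.
# Column names and the example records are illustrative placeholders.
import pandas as pd

survey = pd.DataFrame({
    "age_band": ["18-34", "18-34", "35-54", "35-54", "55+"],
    "intent":   ["high",  "low",   "high",  "high",  "low"],
    "weight":   [1.2,     0.8,     1.0,     1.1,     0.9],  # respondent weights
})

# Sum respondent weights per cell, then normalize rows to percentages.
tab = pd.crosstab(
    survey["age_band"], survey["intent"],
    values=survey["weight"], aggfunc="sum",
).fillna(0)
pct = (tab.div(tab.sum(axis=1), axis=0) * 100).round(1)

print(pct)  # weighted % of each age band by intent level
```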
Posted 5 days ago
4.0 - 6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

• The team this role supports is responsible for the critical function of managing lineups and metadata across various media channels such as cable, broadcast and video on demand, encompassing a wide scope of data from both local and national providers.
• This role requires flexibility to provide technical support across different time zones, including both IST and US business hours on a rotational basis. The Support Engineer will serve as the primary point of contact for customer and stakeholder inquiries, responsible for troubleshooting issues, following Standard Operating Procedures (SOPs) and escalating to the development team when necessary.
• This role requires close collaboration with cross-functional teams to ensure timely and effective issue resolution, driving operational stability and enhancing customer satisfaction.
• In this role, you will debug and attempt to resolve issues independently using SOPs. If unable to resolve an issue, you will escalate it to the next level of support, involving the development team as needed. Your goal will be to ensure efficient handling of support requests and to continuously improve SOPs for recurring issues.

Responsibilities:
• Serve as the first point of contact for customer or stakeholder issues, providing prompt support during US/IST business hours on a rotational basis. Execute SOPs to troubleshoot and resolve recurring issues, ensuring adherence to documented procedures.
• Provide technical support and troubleshooting for cloud-based infrastructure and services, including compute, storage, networking and security components.
• Collaborate with application, security and other internal teams to resolve complex issues related to cloud-based services and infrastructure.
• Escalate unresolved issues to the development team and provide clear documentation of troubleshooting steps taken. Document and maintain up-to-date SOPs, troubleshooting guides, and technical support documentation.
• Collaborate with cross-functional teams to ensure issues are tracked, escalated, and resolved efficiently.
• Proactively identify and suggest process improvements to enhance support quality and response times.

Key Skills:
Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field. Experience range: 4 to 6 years.

Must have skills:
• Proficiency in the Java programming language
• Excellent SQL skills for querying and analyzing data from various database systems (a triage sketch follows this posting)
• Good understanding of database concepts and technologies
• Good problem-solving skills and ability to work independently
• Good proficiency in the AWS cloud platform and its core services
• Good written and verbal communication skills with a strong emphasis on technical documentation
• Ability to follow and create detailed SOPs for various support tasks

Good to have skills:
• Knowledge of Scala/Python for scripting and automation
• Familiarity with big data technologies such as Spark and Hive

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
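For a flavor of the SQL-driven triage this role involves, here is a hedged sketch that checks a lineup table for stale channel metadata; every table and column name, and the connection string, is invented for illustration.

```python
# Sketch: a first-line support check for stale lineup metadata.
# Table/column names and the DSN are illustrative placeholders.
import psycopg2

TRIAGE_SQL = """
    SELECT channel_id, provider, last_updated
    FROM lineup_metadata
    WHERE last_updated < NOW() - INTERVAL '24 hours'
    ORDER BY last_updated
    LIMIT 20
"""

conn = psycopg2.connect("dbname=lineups user=support host=localhost")
with conn.cursor() as cur:
    cur.execute(TRIAGE_SQL)
    stale = cur.fetchall()

if stale:
    # Per the SOP, rows stale beyond 24h get escalated with evidence attached.
    for channel_id, provider, last_updated in stale:
        print(f"STALE: channel={channel_id} provider={provider} since={last_updated}")
else:
    print("All lineup metadata fresh within 24 hours.")
conn.close()
```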
Posted 5 days ago
4.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

• The team this role supports is responsible for the critical function of managing lineups and metadata across various media channels such as cable, broadcast and video on demand, encompassing a wide scope of data from both local and national providers.
• This role requires flexibility to provide technical support across different time zones, including both IST and US business hours on a rotational basis. The Support Engineer will serve as the primary point of contact for customer and stakeholder inquiries, responsible for troubleshooting issues, following Standard Operating Procedures (SOPs) and escalating to the development team when necessary.
• This role requires close collaboration with cross-functional teams to ensure timely and effective issue resolution, driving operational stability and enhancing customer satisfaction.
• In this role, you will debug and attempt to resolve issues independently using SOPs. If unable to resolve an issue, you will escalate it to the next level of support, involving the development team as needed. Your goal will be to ensure efficient handling of support requests and to continuously improve SOPs for recurring issues.

Responsibilities:
• Serve as the first point of contact for customer or stakeholder issues, providing prompt support during US/IST business hours on a rotational basis. Execute SOPs to troubleshoot and resolve recurring issues, ensuring adherence to documented procedures.
• Provide technical support and troubleshooting for cloud-based infrastructure and services, including compute, storage, networking and security components (a CloudWatch-check sketch follows this posting).
• Collaborate with application, security and other internal teams to resolve complex issues related to cloud-based services and infrastructure.
• Escalate unresolved issues to the development team and provide clear documentation of troubleshooting steps taken. Document and maintain up-to-date SOPs, troubleshooting guides, and technical support documentation.
• Collaborate with cross-functional teams to ensure issues are tracked, escalated, and resolved efficiently.
• Proactively identify and suggest process improvements to enhance support quality and response times.

Key Skills:
Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field. Experience range: 4 to 6 years.

Must have skills:
• Proficiency in the Java programming language
• Excellent SQL skills for querying and analyzing data from various database systems
• Good understanding of database concepts and technologies
• Good problem-solving skills and ability to work independently
• Good proficiency in the AWS cloud platform and its core services
• Good written and verbal communication skills with a strong emphasis on technical documentation
• Ability to follow and create detailed SOPs for various support tasks

Good to have skills:
• Knowledge of Scala/Python for scripting and automation
• Familiarity with big data technologies such as Spark and Hive

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
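For the cloud-infrastructure side of this role, below is a minimal hedged sketch using boto3 to pull a CPU metric for an EC2 instance during triage; the instance ID, region, and SOP threshold are placeholders.

```python
# Sketch: pull average CPU for an EC2 instance over the last hour during
# incident triage. Instance ID, region, and threshold are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                 # 5-minute buckets
    Statistics=["Average"],
)

points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
for p in points:
    flag = "  <-- above 80% SOP threshold" if p["Average"] > 80 else ""
    print(f"{p['Timestamp']:%H:%M} avg CPU {p['Average']:.1f}%{flag}")
```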
Posted 5 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Do you have the technical skill to build BI solutions that process billions of rows a day using AWS technologies? Do you want to create next-generation tools for intuitive data access? Do you wake up in the middle of the night with new ideas that will benefit your customers? Are you persistent in bringing your ideas to fruition?

First things first, you know SQL and data modelling like the back of your hand. You also need to know Big Data and MPP systems. You have a history of coming up with innovative solutions to complex technical problems. You are a quick and willing learner of new technologies and have examples to prove your aptitude. You are not tool-centric; you determine what technology works best for the problem at hand and apply it accordingly. You can explain complex concepts to your non-technical customers in simple terms.

Key job responsibilities
• Work with SDE teams and business stakeholders to understand data requirements and design the data ingress flow for the team
• Lead the design, modeling, and implementation of large, evolving, structured, semi-structured and unstructured datasets
• Evaluate and implement efficient distributed storage and query techniques
• Interact and integrate with internal and external teams and systems to extract, transform, and load data from a wide variety of sources
• Implement robust and maintainable code with clear and maintained documentation
• Implement test automation on code through unit testing and integration testing (a unit-tested transform sketch follows this posting)
• Work in a tech stack that is a mix of NAWS services and legacy ETL tools within Amazon

About The Team
The Data Insights, Metrics & Reporting (DIMR) team is the central data engineering team in the Amazon Warehousing & Distribution org, responsible mainly for four things:
• Building and maintaining data engineering and reporting infrastructure using NAWS to support internal/external data use-cases.
• Building data ingestion pipelines from any kind of upstream data source, including (but not limited to) real-time event streaming services, data lakes, and manual file uploads.
• Building mechanisms to vend data to internal team members or external sellers with the right data handling techniques in place.
• Building a robust data mart to support diverse use-cases powered by GenAI tools.

Basic Qualifications
• 1+ years of data engineering experience
• Experience with data modeling, warehousing and building ETL pipelines
• Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
• Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
• Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
• Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, or DataStage

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2970459
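Given the posting's emphasis on unit-tested ETL code, here is a hedged sketch of a pure transform function with a pytest-style test; the function, field names, and dedup rule are invented for illustration.

```python
# Sketch: keep transforms pure so they are trivially unit-testable.
# Field names and the dedup rule are illustrative placeholders.
def dedup_latest(events: list[dict]) -> list[dict]:
    """Keep only the latest record per order_id, by event_ts."""
    latest: dict[str, dict] = {}
    for e in events:
        key = e["order_id"]
        if key not in latest or e["event_ts"] > latest[key]["event_ts"]:
            latest[key] = e
    return sorted(latest.values(), key=lambda e: e["order_id"])

# pytest-style unit test (run with `pytest`):
def test_dedup_latest_keeps_newest():
    events = [
        {"order_id": "A", "event_ts": 1, "status": "created"},
        {"order_id": "A", "event_ts": 3, "status": "shipped"},
        {"order_id": "B", "event_ts": 2, "status": "created"},
    ]
    out = dedup_latest(events)
    assert [e["status"] for e in out] == ["shipped", "created"]
```

Keeping the transform free of I/O is the design choice that makes the test this short: the pipeline wiring can be tested separately with integration tests.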
Posted 5 days ago
4.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

• The team this role supports is responsible for the critical function of managing lineups and metadata across various media channels such as cable, broadcast and video on demand, encompassing a wide scope of data from both local and national providers.
• This role requires flexibility to provide technical support across different time zones, including both IST and US business hours on a rotational basis. The Support Engineer will serve as the primary point of contact for customer and stakeholder inquiries, responsible for troubleshooting issues, following Standard Operating Procedures (SOPs) and escalating to the development team when necessary.
• This role requires close collaboration with cross-functional teams to ensure timely and effective issue resolution, driving operational stability and enhancing customer satisfaction.
• In this role, you will debug and attempt to resolve issues independently using SOPs. If unable to resolve an issue, you will escalate it to the next level of support, involving the development team as needed. Your goal will be to ensure efficient handling of support requests and to continuously improve SOPs for recurring issues.

Responsibilities:
• Serve as the first point of contact for customer or stakeholder issues, providing prompt support during US/IST business hours on a rotational basis. Execute SOPs to troubleshoot and resolve recurring issues, ensuring adherence to documented procedures.
• Provide technical support and troubleshooting for cloud-based infrastructure and services, including compute, storage, networking and security components.
• Collaborate with application, security and other internal teams to resolve complex issues related to cloud-based services and infrastructure.
• Escalate unresolved issues to the development team and provide clear documentation of troubleshooting steps taken. Document and maintain up-to-date SOPs, troubleshooting guides, and technical support documentation.
• Collaborate with cross-functional teams to ensure issues are tracked, escalated, and resolved efficiently.
• Proactively identify and suggest process improvements to enhance support quality and response times.

Key Skills:
Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field. Experience range: 4 to 6 years.

Must have skills:
• Proficiency in the Java programming language
• Excellent SQL skills for querying and analyzing data from various database systems
• Good understanding of database concepts and technologies
• Good problem-solving skills and ability to work independently
• Good proficiency in the AWS cloud platform and its core services
• Good written and verbal communication skills with a strong emphasis on technical documentation
• Ability to follow and create detailed SOPs for various support tasks (a SOP-automation sketch follows this posting)

Good to have skills:
• Knowledge of Scala/Python for scripting and automation
• Familiarity with big data technologies such as Spark and Hive

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
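Because the role stresses executable SOPs, here is a hedged sketch of encoding a runbook step as a small script with clear escalation output; the service names, endpoints, and expected results are placeholders.

```python
# Sketch: encode an SOP's health checks as code so the runbook is repeatable.
# Endpoints and expected results are illustrative placeholders.
import urllib.request

CHECKS = {
    "lineup-api":   "http://lineup-api.internal/health",
    "metadata-job": "http://metadata-job.internal/health",
}

def run_sop_checks() -> list[str]:
    failures = []
    for name, url in CHECKS.items():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status != 200:
                    failures.append(f"{name}: HTTP {resp.status}")
        except OSError as exc:
            failures.append(f"{name}: unreachable ({exc})")
    return failures

if __name__ == "__main__":
    failed = run_sop_checks()
    if failed:
        # Per the SOP, unresolved failures escalate to the dev team with evidence.
        print("ESCALATE:", "; ".join(failed))
    else:
        print("All SOP checks passed.")
```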
Posted 5 days ago
7.0 - 11.0 years
15 - 25 Lacs
Mumbai, Mumbai (All Areas)
Work from Office
Key Responsibilities:
• Design, develop, and implement a Data Lake House architecture on AWS, ensuring scalability, flexibility, and performance.
• Build ETL/ELT pipelines for ingesting, transforming, and processing structured and unstructured data (an Athena-query sketch follows this posting).
• Collaborate with cross-functional teams to gather data requirements and deliver data solutions aligned with business needs.
• Develop and manage data models, schemas, and data lakes for analytics, reporting, and BI purposes.
• Implement data governance practices, ensuring data quality, security, and compliance.
• Perform data integration between on-premise and cloud systems using AWS services.
• Monitor and troubleshoot data pipelines and infrastructure for reliability and scalability.

Skills and Qualifications:
• 7+ years of experience in data engineering, with a focus on cloud data platforms.
• Strong experience with AWS services: S3, Glue, Redshift, Athena, Lambda, IAM, RDS, and EC2.
• Hands-on experience in building data lakes, data warehouses, and lake house architectures.
• Experience building ETL/ELT pipelines using tools like AWS Glue, Apache Spark, or similar.
• Expertise in SQL and Python or Java for data processing and transformations.
• Familiarity with data modeling and schema design in cloud environments.
• Understanding of data security and governance practices, including IAM policies and data encryption.
• Experience with big data technologies (e.g., Hadoop, Spark) and data streaming services (e.g., Kinesis, Kafka).
• Lending domain knowledge will be an added advantage.

Preferred Skills:
• Experience with Databricks or similar platforms for data engineering.
• Familiarity with DevOps practices for deploying data solutions on AWS (CI/CD pipelines).
• Knowledge of API integration and cloud data migration strategies.
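As a small hedged illustration of querying a lake house with the AWS services the posting names, here is a boto3 Athena sketch; the database, table, region, and results bucket are placeholders.

```python
# Sketch: run an Athena query against lake-house tables and poll for results.
# Database, table, region, and output bucket are illustrative placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

start = athena.start_query_execution(
    QueryString="SELECT loan_status, COUNT(*) FROM loans GROUP BY loan_status",
    QueryExecutionContext={"Database": "lakehouse_curated"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
qid = start["QueryExecutionId"]

# Poll until the query finishes (Athena executes asynchronously).
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:  # the first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```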
Posted 5 days ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Overview
Job Description: Leading AI-driven global supply chain solutions software product company and one of Glassdoor’s “Best Places to Work.” Seeking an astute individual with a strong technical foundation and the ability to be hands-on in developing and building automation to improve efficiency, productivity, and customer experience, along with deep knowledge of industry best practices and the ability to implement them while working with the cloud, support, and product teams.

Scope
We are seeking a highly skilled AI/Prompt Engineer to design, implement, and maintain artificial intelligence (AI) and machine learning (ML) solutions for our organization. The ideal candidate will have a deep understanding of AI and ML technologies, as well as experience with data analysis, software development, and cloud computing.

Primary Responsibilities
• Design and implement conversational AI and ML solutions to solve business problems and to improve customer experience and operational efficiency.
• Develop and maintain machine learning models using tools such as TensorFlow, Keras, and PyTorch.
• Collaborate with cross-functional teams to identify opportunities for AI and ML solutions and develop prototypes and proof-of-concepts.
• Develop and maintain data pipelines and ETL processes to support AI and ML workflows.
• Monitor and optimize model performance, accuracy, and scalability.
• Stay up to date with emerging AI and ML technologies and evaluate their potential impact on our organization.
• Develop and maintain chatbots and voice assistants using tools such as Dialogflow, Amazon Lex, and Microsoft Bot Framework.
• Develop and maintain integrations with third-party systems and APIs to support conversational AI workflows.
• Develop and maintain technical documentation, including architecture diagrams, design documents, and standard operating procedures.
• Provide technical guidance and mentorship to other members of the data engineering and software development teams.

What We Are Looking For
• Bachelor’s degree in computer science, information technology, or a related field, with 3+ years of experience in conversational AI engineering, design, and implementation.
• Strong understanding of NLP technologies, including intent recognition, entity extraction, and sentiment analysis (a sentiment-analysis sketch follows this posting).
• Experience with software development, including proficiency in Python and familiarity with software development best practices and tools (Git, Agile methodologies, etc.).
• Familiarity with cloud computing platforms (AWS, Azure, Google Cloud) and related services (S3, EC2, Lambda, etc.).
• Experience with machine learning technologies and frameworks (TensorFlow, Keras, etc.).
• Experience with big data technologies (Hadoop, Spark, etc.); knowledge of Apache Kafka is a plus.
• Experience with containerization (Docker, Kubernetes).
• Experience with data visualization tools (Tableau, Power BI, etc.).
• Experience with reinforcement learning and/or generative models.
• Strong communication and collaboration skills; able to work independently and as part of a team.
• Strong problem-solving and analytical skills, with strong attention to detail and the ability to prioritize tasks effectively.
• Ability to work in an agile and fast-paced development environment.

Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success - and the success of our customers. Does your heart beat like ours? Find out here: Core Values

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
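To ground the NLP bullet above, here is a minimal hedged sketch using the Hugging Face transformers pipeline for sentiment analysis on support utterances; the default model choice, routing threshold, and example texts are placeholders.

```python
# Sketch: score customer utterances for sentiment before routing them
# to a chatbot flow or a human agent. Model and texts are placeholders.
from transformers import pipeline

# Downloads a default sentiment model on first use; pin a model in production.
classifier = pipeline("sentiment-analysis")

utterances = [
    "My shipment has been stuck for three days and nobody responds.",
    "Thanks, the new tracking page is really helpful!",
]

for text, result in zip(utterances, classifier(utterances)):
    # Route strongly negative messages straight to a human agent.
    route = "human-agent" if result["label"] == "NEGATIVE" and result["score"] > 0.9 else "bot"
    print(f"{route:11s} {result['label']:8s} {result['score']:.2f}  {text}")
```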
Posted 5 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description As a data engineer are you looking for opportunity to be among software developers, machine learning scientists to build a data platform that not only caters to BI and reporting but also extends to machine learning applications? As a data engineer in AEE, you will: - Design, implement and support an analytical data infrastructure serving both business intelligence and machine learning applications Managing AWS resources including EC2,Redshift,EMR-Spark etc Collaborate with applied scientist to integrate and build data pipeline as necessary for building and training machine learning models in AEE Collaborate with Product Managers, Financial and Business analysts to recognize and help adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers Key job responsibilities As a data engineer are you looking for opportunity to be among software developers, machine learning scientists to build a data platform that not only caters to BI and reporting but also extends to machine learning applications? As a data engineer in AEE, you will: - Design, implement and support an analytical data infrastructure serving both business intelligence and machine learning applications Managing AWS resources including EC2,Redshift,EMR-Spark etc Collaborate with applied scientist to integrate and build data pipeline as necessary for building and training machine learning models in AEE Collaborate with Product Managers, Financial and Business analysts to recognize and help adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers Basic Qualifications 3+ years of data engineering experience 4+ years of SQL experience Experience with data modeling, warehousing and building ETL pipelines Preferred Qualifications Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. 
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Tamil Nadu Job ID: A2988350
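To make the ETL work above concrete, here is a minimal PySpark sketch of the kind of batch pipeline the role describes: extract raw events from S3, aggregate them, and publish an analysis-ready table. The bucket paths, columns, and table layout are hypothetical, not the team's actual pipeline.

```python
# Hypothetical batch ETL: raw S3 events -> daily revenue rollup.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

# Extract: raw order events landed in S3 (hypothetical path and schema)
orders = spark.read.parquet("s3://example-raw-bucket/orders/")

# Transform: daily revenue per category, the kind of rollup BI and ML both consume
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "category")
    .agg(F.sum("amount").alias("revenue"),
         F.count("*").alias("order_count"))
)

# Load: write partitioned Parquet for downstream Redshift Spectrum or ML jobs
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_revenue/"
)
```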
Posted 5 days ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Are you excited by the challenge of pushing the boundaries with the latest advancements in computer vision and multi-modal Large Language Models? Does the idea of working on the edge of AI research and applying it to create industry-defining software solutions resonate with you? At Nielsen Sports, we provide the most comprehensive and trusted data and analytics for the global sports ecosystem, helping clients understand media value, fan behavior, and sponsorship effectiveness. This role will place you at the forefront of this mission, architecting and implementing sophisticated AI systems that unlock novel insights from complex multimedia sports data. We are looking for Principal / Sr Principal Engineers to join us on this mission. Key Responsibilities: Technical Leadership & Architecture: Lead the design and architecture of scalable and robust AI/ML systems, particularly focusing on computer vision and LLM applications for sports media analysis Model Development & Training: Spearhead the development, training, and fine-tuning of sophisticated deep learning models (e.g., object detectors like RT-DETR, custom classifiers, generative models) on large-scale, domain-specific datasets (like sports imagery and video) Generalized Object Detection: Develop and implement advanced computer vision models capable of identifying a wide array of visual elements (e.g., logos, brand assets, on-screen graphics) in diverse and challenging sports content, including those not seen during training LLM & GenAI Integration: Explore and implement solutions leveraging LLMs and Generative AI for tasks such as content summarization, insight generation, data augmentation, and model validation (e.g., using vision models to verify detections) System Implementation & Deployment: Build and deploy production-ready AI/ML pipelines, ensuring efficiency, scalability, and maintainability. This includes developing APIs and integrating models into broader Nielsen Sports platforms UI/UX for AI Tools: Guide or contribute to the development of internal tools and simple user interfaces (using frameworks like Streamlit, Gradio, or web stacks) to showcase model capabilities, facilitate data annotation, and allow for human-in-the-loop validation Research & Innovation: Stay at the forefront of advancements in computer vision, LLMs, and related AI fields. 
Evaluate and prototype new technologies and methodologies to drive innovation within Nielsen Sports Mentorship & Collaboration: Mentor junior engineers, share knowledge, and collaborate effectively with cross-functional teams including product managers, data scientists, and operations Performance Optimization: Optimize model performance for speed and accuracy, and ensure efficient use of computational resources (including cloud platforms like AWS, GCP, or Azure) Data Strategy: Contribute to data acquisition, preprocessing, and augmentation strategies to enhance model performance and generalization Required Qualifications: Bachelor’s, Master’s, or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field 5+ years (for Principal / MTS-4) / 8+ years (for Senior Principal / MTS-5) of hands-on experience in developing and deploying AI/ML models, with a strong focus on Computer Vision Proven experience in training deep learning models for object detection (e.g., YOLO, Faster R-CNN, DETR variants like RT-DETR) on custom datasets Experience in fine-tuning LLMs like Llama 2/3, Mistral, or open-source models available on Hugging Face using libraries such as Hugging Face Transformers, PEFT, or specialized frameworks like Axolotl/Unsloth Proficiency in Python and deep learning frameworks such as PyTorch (preferred) or TensorFlow/Keras Demonstrable experience with multi-modal Large Language Models (LLMs) and their application, including familiarity with transformer architectures and fine-tuning techniques Experience with developing simple UIs for model interaction or data annotation (e.g., using Streamlit, Gradio, Flask/Django) Solid understanding of MLOps principles and experience with tools for model deployment, monitoring, and lifecycle management (e.g., Docker, Kubernetes, Kubeflow, MLflow) Strong software engineering fundamentals, including code versioning (Git), testing, and CI/CD practices Excellent problem-solving skills and the ability to work with complex, large-scale datasets Strong communication and collaboration skills, with the ability to convey complex technical concepts to diverse audiences Full Stack Development experience in any one stack Preferred Qualifications / Bonus Skills: Experience with Generative AI vision models for tasks like image analysis, description, or validation Track record of publications in top-tier AI/ML/CV conferences or journals Experience working with sports data (broadcast feeds, social media imagery, sponsorship analytics) Proficiency in cloud computing platforms (AWS, GCP, Azure) and their AI/ML services Experience with video processing and analysis techniques Familiarity with data pipeline and distributed computing tools (e.g., Apache Spark, Kafka) Demonstrated ability to lead technical projects and mentor team members Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
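By way of illustration, a minimal sketch of the generalized logo/asset detection described above, using the Hugging Face object-detection pipeline with a public RT-DETR checkpoint. The checkpoint name and input frame are assumptions for the example; a production system would use a model fine-tuned on sports imagery with custom brand classes.

```python
# Hypothetical detection pass over a single broadcast frame.
from transformers import pipeline
from PIL import Image

# Public RT-DETR checkpoint on the Hub (assumed available)
detector = pipeline("object-detection", model="PekingU/rtdetr_r50vd_coco_o365")

frame = Image.open("frame_0001.jpg")  # hypothetical broadcast frame
for det in detector(frame, threshold=0.5):
    box = det["box"]  # dict with xmin, ymin, xmax, ymax
    print(f"{det['label']}: {det['score']:.2f} at {box}")
```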
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from TCS! TCS is hiring for PySpark Developer
Desired Experience Range: 5 to 9 years
Job Location: Chennai
Required Skills: PySpark, Hadoop, Big Data
Responsibility of / Expectations from the Role
• Minimum of 5 years of hands-on experience in designing, building, and optimizing data pipelines, data models and Spark-based applications in Big Data environments.
• Extensive experience and deep expertise in data modeling and data model concepts, particularly with large datasets, ensuring the design and implementation of efficient, scalable, and high-performing data models.
• A strong software engineer, you take pride in what you develop, with a strong testing ethos.
• Strong proficiency in Python programming, with a focus on data processing and analysis
• Proven experience working with PySpark for large-scale data processing and analysis
• Extensive experience in designing, building, and optimizing Big Data pipelines and architectures, with a strong focus on supporting both batch and real-time data workflows.
• In-depth knowledge of Spark, including experience with Spark performance tuning techniques to achieve optimal processing efficiency
• Strong SQL skills for querying and manipulating large datasets, with experience in optimizing complex queries for performance
Regards, Monisha
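For a sense of the Spark performance tuning this role asks about, here is a small hedged sketch of two common techniques: broadcasting a small dimension table to avoid shuffling the large fact table, and repartitioning on the grouping key before a wide aggregation. All paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("/data/sales_facts")   # large fact table (hypothetical)
dims = spark.read.parquet("/data/product_dim")    # small dimension table

# Broadcast join: ship the small table to every executor instead of
# shuffling the large one across the cluster.
joined = facts.join(broadcast(dims), "product_id")

# Repartition on the grouping key so the wide aggregation shuffles evenly.
result = (
    joined.repartition(200, "region")
          .groupBy("region")
          .agg(F.sum("amount").alias("total_sales"))
)
result.write.mode("overwrite").parquet("/data/sales_by_region")
```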
Posted 5 days ago
3.0 years
6 - 9 Lacs
Hyderābād
On-site
Job Description
Senior Manager, Data Engineer
The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.
Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.
Role Overview
Responsibilities
Designs, builds, and maintains data pipeline architecture - ingests, processes, and publishes data for consumption
Batch-processes collected data, formatting it in an optimized, analysis-ready way
Ensures best-practice sharing across the organization
Enables delivery of data-analytics projects
Develops deep knowledge of the company's supported technology; understands the whole complexity/dependencies between multiple teams and platforms (people, technologies)
Communicates extensively with other platforms/competencies to comprehend new trends and methodologies being implemented/considered within the company ecosystem
Understands the customer's and stakeholders' business needs/priorities and helps build solutions that support our business goals
Establishes and manages close relationships with customers/stakeholders
Maintains an overview of data engineering market developments to explore new ways of delivering pipelines and increasing their value/contribution
Builds a “community of practice”, leveraging experience from delivering complex analytics projects
Is accountable for ensuring that the team delivers solutions with high quality standards, timeliness, compliance and excellent user experience
Contributes to innovative experiments, specifically to idea generation, idea incubation and/or experimentation, identifying tangible and measurable criteria
Qualifications:
Bachelor’s degree in Computer Science, Data Science, Information Technology, Engineering or a related field.
3+ years of experience as a Data Engineer or in a similar role, with a strong portfolio of data projects.
3+ years of SQL experience, with the ability to write and optimize queries for large datasets.
1+ years of experience and proficiency in Python for data manipulation, automation, and pipeline development.
Experience with Databricks, including creating notebooks and utilizing Spark for big data processing.
Strong experience with data warehousing solutions (such as Snowflake), including schema design and performance optimization.
Experience with data governance and quality management tools, particularly Collibra DQ.
Strong analytical and problem-solving skills, with attention to detail.
SAP Basis experience working on SAP S/4HANA deployments on cloud platforms (for example: AWS, GCP or Azure).
Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.
Who we are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.
What we look for: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.
#HYDIT
Current Employees apply HERE
Current Contingent Workers apply HERE
Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business, Business Intelligence (BI), Business Management, Contractor Management, Cost Reduction, Database Administration, Database Optimization, Data Engineering, Data Flows, Data Infrastructure, Data Management, Data Modeling, Data Optimization, Data Quality, Data Visualization, Design Applications, ETL Tools, Information Management, Management Process, Operating Cost Reduction, Senior Program Management, Social Collaboration, Software Development, Software Development Life Cycle (SDLC) {+ 1 more}
Preferred Skills:
Job Posting End Date: 08/13/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R350686
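As a hedged illustration of the Snowflake warehousing work listed in the qualifications above, the sketch below runs an optimized 30-day aggregate from Python via the Snowflake connector. The account, credentials, and table names are hypothetical placeholders.

```python
import snowflake.connector

# Hypothetical connection parameters
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="MART",
)

# Aggregate pushed down to the warehouse rather than pulled row-by-row
query = """
    SELECT order_date, category, SUM(amount) AS revenue
    FROM fact_orders
    WHERE order_date >= DATEADD(day, -30, CURRENT_DATE)
    GROUP BY order_date, category
"""
with conn.cursor() as cur:
    for order_date, category, revenue in cur.execute(query):
        print(order_date, category, revenue)
conn.close()
```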
Posted 5 days ago
7.0 years
5 - 7 Lacs
Hyderābād
On-site
You strive to be an essential member of a diverse team of visionaries dedicated to making a lasting impact. Don’t pass up this opportunity to collaborate with some of the brightest minds in the field and deliver best-in-class solutions to the industry. As a Senior Lead Data Architect at JPMorgan Chase within Consumer and Community Banking Data Technology, you are an integral part of a team that works to develop high-quality data architecture solutions for various software applications, platform and data products. Drive significant business impact and help shape the global target state architecture through your capabilities in multiple data architecture domains.
Job responsibilities
Represents the data architecture team at technical governance bodies and provides feedback regarding proposed improvements to data architecture governance practices
Evaluates new and current technologies using existing data architecture standards and frameworks
Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors
Designs secure, high-quality, scalable solutions and reviews architecture solutions designed by others
Drives data architecture decisions that impact data product & platform design, application functionality, and technical operations and processes
Serves as a function-wide subject matter expert in one or more areas of focus
Actively contributes to the data engineering community as an advocate of firmwide data frameworks, tools, and practices in the Software Development Life Cycle
Influences peers and project decision-makers to consider the use and application of leading-edge technologies
Advises junior architects and technologists
Required qualifications, capabilities, and skills
7+ years of hands-on practical experience delivering data architecture and system designs, data engineering, testing, and operational stability
Advanced knowledge of architecture, applications, and technical processes with considerable in-depth knowledge in the data architecture discipline and solutions (e.g., data modeling, native cloud data services, business intelligence, artificial intelligence, machine learning, data domain driven design, etc.)
Practical cloud-based data architecture and deployment experience, preferably AWS
Practical SQL development experience in cloud-native relational databases, e.g. Snowflake, Athena, Postgres
Ability to deliver various types of data models with multiple deployment targets, e.g. conceptual, logical and physical data models deployed as operational vs. analytical data stores
Advanced in one or more data engineering disciplines, e.g. streaming, ELT, event processing
Ability to tackle design and functionality problems independently with little to no oversight
Ability to evaluate current and emerging technologies to select or recommend the best solutions for the future state data architecture
Preferred qualifications, capabilities, and skills
Financial services experience, card and banking a big plus
Practical experience in modern data processing technologies, e.g., Kafka streaming, DBT, Spark, Airflow, etc.
Practical experience in data mesh and/or data lake
Practical experience in machine learning/AI with Python development a big plus
Practical experience in graph and semantic technologies, e.g. RDF, LPG, Neo4j, Gremlin
Knowledge of architecture assessment frameworks, e.g. Architecture Tradeoff Analysis
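To illustrate the modeling requirement above, one logical model deployed two ways, here is a hedged sketch: a normalized operational table and a denormalized analytical table, created against Postgres via psycopg2. All schema, table, and column names are hypothetical.

```python
import psycopg2

OPERATIONAL_DDL = """
CREATE SCHEMA IF NOT EXISTS ops;
-- Normalized operational store: one row per transaction
CREATE TABLE IF NOT EXISTS ops.card_transaction (
    txn_id      BIGINT PRIMARY KEY,
    account_id  BIGINT NOT NULL,   -- FK to ops.account, omitted for brevity
    merchant_id BIGINT NOT NULL,   -- FK to ops.merchant, omitted for brevity
    amount      NUMERIC(12, 2) NOT NULL,
    txn_ts      TIMESTAMPTZ NOT NULL
);
"""

ANALYTICAL_DDL = """
CREATE SCHEMA IF NOT EXISTS mart;
-- Denormalized analytical store: pre-joined and pre-aggregated for BI
CREATE TABLE IF NOT EXISTS mart.fact_card_spend (
    txn_date       DATE NOT NULL,
    merchant_name  TEXT NOT NULL,
    account_region TEXT NOT NULL,
    txn_count      BIGINT NOT NULL,
    total_amount   NUMERIC(18, 2) NOT NULL
);
"""

conn = psycopg2.connect("dbname=example user=example")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute(OPERATIONAL_DDL)
    cur.execute(ANALYTICAL_DDL)
conn.close()
```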
Posted 5 days ago
5.0 years
5 - 8 Lacs
Hyderābād
On-site
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.
Your future duties and responsibilities
Position: Senior Software Engineer
Experience: 5-10 years
Category: Software Development / Engineering
Shift Timings: 1:00 pm to 10:00 pm
Main location: Hyderabad
Work Type: Work from office
Skill: Spark (PySpark), Python and SQL
Employment Type: Full Time
Position ID: J0625-0219
Required qualifications to be successful in this role
Must-have skills:
5+ years of development experience with Spark (PySpark), Python and SQL.
Extensive knowledge of building data pipelines.
Hands-on experience with Databricks development.
Strong experience developing on Linux OS.
Experience with scheduling and orchestration (e.g. Databricks Workflows, Airflow, Prefect, Control-M).
Good-to-have skills:
Solid understanding of distributed systems, data structures, design principles.
Agile Development Methodologies (e.g. SAFe, Kanban, Scrum).
Comfortable communicating with teams via showcases/demos.
Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project.
Actively migrate use cases from our on-premises Data Lake to Databricks on GCP.
Collaborate with Product Management and business partners to understand use case requirements and reporting.
Adhere to internal development best practices/lifecycle (e.g. Testing, Code Reviews, CI/CD, Documentation).
Document and showcase feature designs/workflows.
Participate in team meetings and discussions around product development.
Stay up to date on the latest industry trends and design patterns.
3+ years experience with GIT.
3+ years experience with CI/CD (e.g. Azure Pipelines).
Experience with streaming technologies, such as Kafka, Spark.
Experience building applications on Docker and Kubernetes.
Cloud experience (e.g. Azure, Google).
Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
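For the scheduling and orchestration skills listed above, here is a minimal Airflow DAG sketch (assuming Airflow 2.4+). The job scripts and schedule are hypothetical; on Databricks one might instead use the Databricks provider's operators.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_lake_refresh",
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",  # every night at 02:00
    catchup=False,
) as dag:
    # Hypothetical Spark job scripts submitted from the scheduler host
    extract = BashOperator(
        task_id="extract",
        bash_command="spark-submit extract_job.py",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit transform_job.py",
    )
    extract >> transform  # transform runs only after extract succeeds
```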
Posted 5 days ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation-inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures—and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.
What You'll Do
Our Global HR Shared Services Center (HRSSC), located across three global hubs (India, Costa Rica, and Portugal), delivers centralized and efficient support for HR processes worldwide. By working here, you’ll be part of our team that’s transforming how we deliver world-class HR services to our employees, globally. We support the full employee lifecycle with precision, enable efficiency gains through smart systems and collaboration, and deliver measurable outcomes that enhance every employee’s journey at BCG.
You will be a key member of our Global HR Shared Services Center (HRSSC), supporting regional and local HR teams and employees worldwide with administrative HR processes. You’ll collaborate with colleagues across multiple geographies and time zones, forming part of a close-knit global HR network that values teamwork, ownership, and continuous learning.
Key Responsibilities Include
Preparing and processing employee paperwork for new hires, promotions, transfers, exits, and changes.
Maintaining personnel records in compliance with legal requirements and internal standards.
Supporting onboarding and background verification, including induction plans and welcome communications.
Managing employee documentation requests, including verification letters, references, and visa invitation letters.
Delivering reporting on employee data (e.g. distribution lists, anniversaries, milestones).
Supporting internal audits with required documentation and timely responses.
What You'll Bring
A bachelor's degree.
1–3+ years of relevant experience in HR operations, shared services, or a process-driven role.
Familiarity with Workday (preferred) or other HR ERP systems.
Proficiency in Microsoft Office (Excel, PowerPoint, Outlook, Word, Visio).
Experience working in a professional services or multinational environment.
Fluent verbal and written English language skills are required.
Who You'll Work With
Be part of a respected global brand that invests in its people.
Exposure to world-class HR systems, like Workday.
Work in a culture that prioritizes learning, diversity, and inclusion.
Join a growing team where your work directly drives global impact.
Additional Info
You’re Good At
Thriving under pressure with exceptional attention to detail.
Staying flexible and reliable in a dynamic and changing environment.
Managing multiple tasks with structure and discipline.
Handling sensitive data with confidentiality and professionalism.
Communicating clearly and professionally, both in writing and speech.
Creating meaningful experiences for every customer through exceptional service.
Collaborating across cultures and time zones.
Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
Posted 5 days ago
7.0 years
0 Lacs
Hyderābād
On-site
JOB DESCRIPTION
You strive to be an essential member of a diverse team of visionaries dedicated to making a lasting impact. Don’t pass up this opportunity to collaborate with some of the brightest minds in the field and deliver best-in-class solutions to the industry. As a Senior Lead Data Architect at JPMorgan Chase within Consumer and Community Banking Data Technology, you are an integral part of a team that works to develop high-quality data architecture solutions for various software applications, platform and data products. Drive significant business impact and help shape the global target state architecture through your capabilities in multiple data architecture domains.
Job responsibilities
Represents the data architecture team at technical governance bodies and provides feedback regarding proposed improvements to data architecture governance practices
Evaluates new and current technologies using existing data architecture standards and frameworks
Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors
Designs secure, high-quality, scalable solutions and reviews architecture solutions designed by others
Drives data architecture decisions that impact data product & platform design, application functionality, and technical operations and processes
Serves as a function-wide subject matter expert in one or more areas of focus
Actively contributes to the data engineering community as an advocate of firmwide data frameworks, tools, and practices in the Software Development Life Cycle
Influences peers and project decision-makers to consider the use and application of leading-edge technologies
Advises junior architects and technologists
Required qualifications, capabilities, and skills
7+ years of hands-on practical experience delivering data architecture and system designs, data engineering, testing, and operational stability
Advanced knowledge of architecture, applications, and technical processes with considerable in-depth knowledge in the data architecture discipline and solutions (e.g., data modeling, native cloud data services, business intelligence, artificial intelligence, machine learning, data domain driven design, etc.)
Practical cloud-based data architecture and deployment experience, preferably AWS
Practical SQL development experience in cloud-native relational databases, e.g. Snowflake, Athena, Postgres
Ability to deliver various types of data models with multiple deployment targets, e.g. conceptual, logical and physical data models deployed as operational vs. analytical data stores
Advanced in one or more data engineering disciplines, e.g. streaming, ELT, event processing
Ability to tackle design and functionality problems independently with little to no oversight
Ability to evaluate current and emerging technologies to select or recommend the best solutions for the future state data architecture
Preferred qualifications, capabilities, and skills
Financial services experience, card and banking a big plus
Practical experience in modern data processing technologies, e.g., Kafka streaming, DBT, Spark, Airflow, etc.
Practical experience in data mesh and/or data lake
Practical experience in machine learning/AI with Python development a big plus
Practical experience in graph and semantic technologies, e.g. RDF, LPG, Neo4j, Gremlin
Knowledge of architecture assessment frameworks, e.g. Architecture Tradeoff Analysis
ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
ABOUT THE TEAM
Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction.
Posted 5 days ago
7.0 - 8.0 years
22 - 28 Lacs
Hyderābād
On-site
Hiring: Senior Cognos TM1 Developer with DevSecOps Expertise
Location: Hyderabad - Full-Time
Domain: Banking & Finance
Experience: 7–8 Years (Required)
Notice Period: Immediate to 15 Days
We’re looking for a seasoned TM1 Developer with strong DevSecOps experience to join a high-performing tech team in a BFSI environment. You'll drive TM1 development, automate CI/CD pipelines, and lead secure data platform integrations.
Key Skills:
IBM Cognos TM1 (Cubes, Rules, Processes, Security, REST API)
DevOps Tools: Jenkins, Ansible, Git
Scripting: Python, Shell
Databases: PostgreSQL, Oracle, SQL Server, DB2
Containers: Docker, Kubernetes
Agile, Security Compliance
What You'll Do:
Build & optimize TM1 applications
Automate deployments & container orchestration
Lead platform integration with Spark and Delta Lake
Ensure secure, scalable data architecture
Job Type: Full-time
Pay: ₹2,200,000.00 - ₹2,800,000.00 per year
Application Question(s):
How many years of hands-on experience do you have in developing Cognos TM1 Cubes, Rules, and TI Processes (without using wizards)?
Have you implemented CI/CD pipelines using tools like Jenkins, Ansible, or Git in a production environment?
What scripting languages have you used for automation and integration tasks in TM1 projects (e.g., Python, Shell)?
Describe your experience with containerization and orchestration using Docker and Kubernetes.
How do you ensure security compliance and control in TM1 deployments, especially in regulated industries like BFSI?
Work Location: In person
Application Deadline: 19/06/2025
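As a rough sketch of the Python scripting against the TM1 REST API mentioned in the key skills, the example below lists cubes and runs an MDX query with plain requests. The host, port, credentials, and cube/dimension names are all hypothetical, and real deployments more commonly use the TM1py library and proper certificate handling.

```python
import requests

BASE = "https://tm1.example.com:8010/api/v1"  # hypothetical TM1 server
session = requests.Session()
session.auth = ("admin", "***")   # basic auth for illustration only
session.verify = False            # demo only; verify certificates in production

# List cube names via the OData Cubes collection
cubes = session.get(f"{BASE}/Cubes?$select=Name").json()["value"]
print([c["Name"] for c in cubes])

# Run an MDX query against a hypothetical Finance cube
mdx = {"MDX": "SELECT {[Version].[Actual]} ON 0, "
              "{[Account].[Revenue]} ON 1 FROM [Finance]"}
resp = session.post(f"{BASE}/ExecuteMDX?$expand=Cells($select=Value)", json=mdx)
print(resp.json()["Cells"])
```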
Posted 5 days ago
10.0 - 16.0 years
4 - 7 Lacs
Hyderābād
On-site
Skill: Java, Spark, Kafka
Experience: 10 to 16 years
Location: Hyderabad
As a Data Engineer, you will:
Support the design and rollout of the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
Identify data sources, design and implement data schemas/models, and integrate data to meet the requirements of business stakeholders
Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization
Work with business, IT and data stakeholders to support data-related technical issues and data infrastructure needs, as well as to build the most flexible and scalable data platform
With a strong focus on DataOps, design, develop and deploy scalable batch and/or real-time data pipelines
Design, document, test and deploy ETL/ELT processes
Find the right tradeoffs between performance, reliability, scalability, and cost for the data pipelines you implement
Monitor data processing efficiency and propose solutions for improvements
Have the discipline to create and maintain comprehensive project documentation
Build and share knowledge with colleagues and coach junior profiles
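For the real-time side of this role, here is a compact sketch of a Spark Structured Streaming job reading from Kafka. It is shown in PySpark for brevity even though the posting lists Java; the broker, topic, paths, and schema are hypothetical, and the job assumes the spark-sql-kafka package is on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Hypothetical event schema carried in the Kafka message value
schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "payments")                   # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the parsed stream as Parquet with checkpointed progress tracking
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/stream/payments")
    .option("checkpointLocation", "/data/checkpoints/payments")
    .start()
)
query.awaitTermination()
```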
Posted 5 days ago
1.0 years
4 - 6 Lacs
Hyderābād
On-site
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
Business Data Technologies (BDT) makes it easier for teams across Amazon to produce, store, catalog, secure, move, and analyze data at massive scale. Our managed solutions combine standard AWS tooling, open-source products, and custom services to free teams from worrying about the complexities of operating at Amazon scale. This lets BDT customers move beyond the engineering and operational burden associated with managing and scaling platforms, and instead focus on scaling the value they can glean from their data, both for their customers and their teams.
We own one of the largest data lakes at Amazon, where thousands of Amazon teams can search, share, and store exabytes (EB) of data in a secure and seamless way; using our solutions, teams around the world can schedule/process millions of workloads on a daily basis. We provide enterprise solutions that focus on compliance, security, integrity, and cost efficiency of operating and managing EBs of Amazon data.
Key job responsibilities
CORE RESPONSIBILITIES:
· Be hands-on with ETL to build data pipelines to support automated reporting
· Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
· Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift
· Model data and metadata for ad-hoc and pre-built reporting
· Interface with business customers, gathering requirements and delivering complete reporting solutions
· Build robust and scalable data integration (ETL) pipelines using SQL, Python and Spark
· Build and deliver high quality data sets to support business analysts, data scientists, and customer reporting needs
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
· Participate in strategic & tactical planning discussions
A day in the life
As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Some of the key activities include:
Crafting the Data Flow: Design and build data pipelines, the backbone of our data ecosystem. Ensure the integrity of the data journey by implementing robust data quality checks and monitoring processes.
Architect for Insights: Translate complex business requirements into efficient data models that optimize data analysis and reporting. Automate data processing tasks to streamline workflows and improve efficiency.
Become a data detective, ensuring data availability and performance.
Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, DataStage, etc.
Knowledge of cloud services such as AWS or equivalent
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
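Given the emphasis above on data quality checks before publishing reporting datasets, here is a hedged PySpark sketch of a simple quality gate. The dataset path, key column, and thresholds are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/curated/orders")  # hypothetical dataset

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail loudly rather than publish bad data downstream
assert null_keys == 0, f"{null_keys} rows missing order_id"
assert dupes / max(total, 1) < 0.001, f"duplicate rate too high: {dupes}/{total}"
print(f"quality checks passed on {total} rows")
```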
Posted 5 days ago