
1759 Redshift Jobs - Page 14

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Description

As a data engineer, are you looking for an opportunity to work alongside software developers and machine learning scientists to build a data platform that caters not only to BI and reporting but also to machine learning applications? As a data engineer in AEE, you will:

- Design, implement, and support an analytical data infrastructure serving both business intelligence and machine learning applications
- Manage AWS resources including EC2, Redshift, EMR/Spark, etc.
- Collaborate with applied scientists to integrate and build the data pipelines needed to build and train machine learning models in AEE
- Collaborate with product managers and financial and business analysts to recognize and help adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies (see the sketch at the end of this listing)
- Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
- Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering, and machine learning
- Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

Basic Qualifications

- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing, and building ETL pipelines

Preferred Qualifications

- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Tamil Nadu Job ID: A2988350
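For a concrete picture of the "extract, transform, and load into Redshift" work this listing describes, here is a minimal sketch of a bulk load issued from Python. The cluster endpoint, credentials, table, S3 prefix, and IAM role are all hypothetical placeholders; a real pipeline would pull secrets from a vault rather than hard-coding them.

```python
import psycopg2  # standard PostgreSQL driver; Redshift speaks the same protocol

# Hypothetical connection details -- fetch from a secrets manager in practice.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="<from-secrets-manager>",
)

# COPY ingests the S3 files in parallel across the cluster's slices.
COPY_SQL = """
    COPY analytics.page_views                       -- hypothetical target table
    FROM 's3://example-bucket/page_views/'          -- hypothetical S3 prefix
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)  # commits on clean exit from the with-block
conn.close()
```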

Posted 6 days ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

This role is part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the internet via browsers or use mobile apps downloaded from the Apple and Google stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive, yet it gathers many biometric data points that the backend system can use to identify who is using the device and to detect fraudulent behavior.

As an Engineering Manager, you will lead a cross-functional team of developers and DevOps engineers using a Scrum/Agile approach. You will provide technical expertise and guidance to team members and help develop designs for complex applications, and you will plan tasks and project phases as well as review, comment on, and approve the analysis, proposed designs, and test strategies produced by members of the team.

Responsibilities

- Oversee the development of scalable, reliable, and cost-effective software solutions with an emphasis on quality and best-practice coding standards
- Aid with driving business unit financials, and ensure budgets and schedules meet corporate requirements
- Participate in corporate development of methods, techniques, and evaluation criteria for projects, programs, and people
- Take overall control of planning, staffing, budgeting, and managing expense priorities for the team you lead
- Provide training and coaching, and share technical knowledge with less experienced staff
- Perform people-manager duties, including annual reviews, career guidance, and compensation planning
- Rapidly identify technical issues as they emerge and assess their impact on the business
- Provide day-to-day work direction to a large team of developers
- Collaborate effectively with Data Science to understand, translate, and integrate data methodologies into the product
- Collaborate with product owners to translate complex business requirements into technical solutions, providing leadership in the design and architecture processes
- Stay informed about the latest technology and methodology by participating in industry forums, maintaining an active peer network, and engaging actively with customers
- Cultivate a team environment focused on continuous learning, where innovative technologies are developed and refined through collaborative effort

Key Skills

- Bachelor's degree in computer science, engineering, or a relevant field
- 8+ years of experience in information technology solutions development and 2+ years of managerial experience
- Proven experience leading and managing software development teams
- Development background in Java in an AWS cloud-based environment for high-volume data processing
- Experience with data warehouses, ETL, and/or data lakes
- Experience with databases such as Postgres, DynamoDB, or Redshift
- Good understanding of CI/CD principles and tools; GitLab a plus
- Ability to provide solutions that follow best practices for resilience, scalability, cloud optimization, and security
- Excellent project management skills

Other desirable skills

- Knowledge of networking principles and security best practices
- AWS certification is a plus
- Experience with MS Project or Smartsheet
- Experience with Airflow, Python, Lambda, Prometheus, Grafana, and Opsgenie a bonus
- Exposure to the Google Cloud Platform (GCP) useful

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or other characteristics protected by law.

Posted 6 days ago

Apply

1.0 years

4 - 6 Lacs

Hyderābād

On-site

Source: Glassdoor

- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

Business Data Technologies (BDT) makes it easier for teams across Amazon to produce, store, catalog, secure, move, and analyze data at massive scale. Our managed solutions combine standard AWS tooling, open-source products, and custom services to free teams from worrying about the complexities of operating at Amazon scale. This lets BDT customers move beyond the engineering and operational burden associated with managing and scaling platforms, and instead focus on scaling the value they can glean from their data, both for their customers and their teams.

We own one of the largest data lakes at Amazon, where thousands of Amazon teams can search, share, and store exabytes (EB) of data in a secure and seamless way; using our solutions, teams around the world can schedule and process millions of workloads on a daily basis. We provide enterprise solutions that focus on compliance, security, integrity, and cost efficiency of operating and managing EBs of Amazon data.

Key job responsibilities

Core responsibilities:

- Be hands-on with ETL to build data pipelines to support automated reporting
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (see the sketch at the end of this listing)
- Build and deliver high-quality datasets to support business analysts, data scientists, and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Participate in strategic and tactical planning discussions

A day in the life

As a Data Engineer, you will work with cross-functional partners from Science, Product, SDEs, Operations, and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Some of the key activities include:

- Crafting the data flow: Design and build data pipelines, the backbone of our data ecosystem, and ensure the integrity of the data journey by implementing robust data quality checks and monitoring processes.
- Architecting for insights: Translate complex business requirements into efficient data models that optimize data analysis and reporting, and automate data processing tasks to streamline workflows and improve efficiency.
- Becoming a data detective: ensure data availability and performance.

Preferred qualifications:

- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
- Knowledge of cloud services such as AWS or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
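The "ETL pipelines using SQL, Python, and Spark" bullet above is the heart of this role; below is a minimal PySpark sketch of that pattern: reading raw events, aggregating, and writing a reporting dataset. The S3 paths and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-report-etl").getOrCreate()

# Extract: hypothetical raw click events landed in S3 as Parquet.
events = spark.read.parquet("s3://example-bucket/raw/clicks/")

# Transform: daily active users per marketplace (illustrative columns).
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "marketplace_id")
    .agg(F.countDistinct("customer_id").alias("active_users"))
)

# Load: write the reporting dataset, partitioned by day for cheap reads.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/reports/daily_active_users/"
)
```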

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

On-site

Source: LinkedIn

Our Mission: 6sense is on a mission to revolutionize how B2B organizations create revenue by predicting the customers most likely to buy and recommending the best course of action to engage anonymous buying teams. 6sense Revenue AI is the only sales and marketing platform to unlock the ability to create, manage, and convert high-quality pipeline to revenue.

Our People: People are the heart and soul of 6sense. We serve with passion and purpose. We live by our Being 6sense values of Accountability, Growth Mindset, Integrity, Fun, and One Team. Every 6sensor plays a part in defining the future of our industry-leading technology. 6sense is a place where difference-makers roll up their sleeves, take risks, act with integrity, and measure success by the value we create for our customers. We want 6sense to be the best chapter of your career.

Position Overview: We are seeking a highly skilled Sr. Data Analyst to focus on backend data support and governance. The ideal candidate will have a strong background in data engineering principles, SQL, and data modeling within modern cloud data platforms. This individual will play a key role in building and maintaining a scalable and trusted data infrastructure that supports reporting across the customer journey. A working knowledge of data governance frameworks and the ability to collaborate cross-functionally are essential.

Key Responsibilities:

- Build, maintain, and optimize data pipelines and models within our data warehouse to enable trusted downstream analytics.
- Develop scalable, clean, and joinable datasets to support reporting across sales, marketing, customer success, and finance functions.
- Collaborate closely with RevOps, data engineering, and analytics stakeholders to ensure data is structured and aligned to business needs.
- Support data governance by enforcing data definitions, naming conventions, and ownership models.
- Monitor and improve data quality, lineage, and integrity through proactive checks and documentation.
- Translate raw data into reusable, governed tables and metrics to support self-service and centralized reporting use cases.
- Assist in standardizing metrics and business definitions to drive consistent reporting across systems and teams.

Qualifications:

- Bachelor’s degree in Computer Science, Data Science, Information Systems, or a related field; Master’s preferred.
- 3+ years of experience in a data analytics, analytics engineering, or backend reporting role.
- Expert-level SQL skills and experience working with cloud data warehouses (Snowflake, Redshift, BigQuery, etc.).
- Solid understanding of dimensional modeling, data architecture, and ELT pipeline development.
- Familiarity with data governance tools, policies, or best practices.
- Experience with BI platforms (Looker, Tableau, Power BI, Sigma) is a plus.
- Strong organizational and communication skills; ability to translate technical requirements into business impact.

Our Benefits: Full-time employees can take advantage of health coverage, paid parental leave, generous paid time off and holidays, quarterly self-care days off, and stock options. We’ll make sure you have the equipment and support you need to work and connect with your teams, at home or in one of our offices. We have a growth mindset culture that is represented in all that we do, from onboarding through to numerous learning and development initiatives, including access to our LinkedIn Learning platform. Employee well-being is also top of mind for us. We host quarterly wellness education sessions to encourage self-care and personal growth. From wellness days to ERG-hosted events, we celebrate and energize all 6sense employees and their backgrounds.

Equal Opportunity Employer: 6sense is an Equal Employment Opportunity and Affirmative Action Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. If you require reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to jobs@6sense.com.

We are aware of recruiting impersonation attempts that are not affiliated with 6sense in any way. All email communications from 6sense will originate from the @6sense.com domain. We will not initially contact you via text message and will never request payments. If you are uncertain whether you have been contacted by an official 6sense employee, reach out to jobs@6sense.com.

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Talend: Design, develop, and document Talend ETL processes, technical architecture, and data pipelines, including performance scaling, using Talend tooling to integrate data and ensure data quality in a big data environment.

AWS / Snowflake:

- Design, develop, and maintain data models using SQL and Snowflake / AWS Redshift-specific features (see the sketch below).
- Collaborate with stakeholders to understand the requirements of the data warehouse.
- Implement data security, privacy, and compliance measures.
- Perform data analysis, troubleshoot data issues, and provide technical support to end-users.
- Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity.
- Stay current with new AWS/Snowflake services and features and recommend improvements to existing architecture.
- Design and implement scalable, secure, and cost-effective cloud solutions using AWS / Snowflake services.
- Collaborate with cross-functional teams to understand requirements and provide technical guidance.
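To make "Snowflake-specific features" slightly more concrete, here is a hedged sketch of a fact-table DDL applied through the Snowflake Python connector. The table, columns, and credentials are invented, and the clustering key is just one example of a platform-specific modeling lever.

```python
import snowflake.connector

# Hypothetical fact table; CLUSTER BY is a Snowflake-specific feature that
# keeps micro-partitions pruned for date-range queries on large tables.
DDL = """
CREATE TABLE IF NOT EXISTS analytics.fact_sales (
    sale_id      NUMBER         NOT NULL,
    customer_key NUMBER         NOT NULL,
    sale_date    DATE           NOT NULL,
    amount       NUMBER(12, 2)
)
CLUSTER BY (sale_date);
"""

conn = snowflake.connector.connect(
    account="example-account",      # hypothetical credentials; a real job
    user="etl_user",                # would read these from a secrets store
    password="<from-secrets-store>",
    warehouse="TRANSFORM_WH",
    database="DW",
)
conn.cursor().execute(DDL)
conn.close()
```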

Posted 6 days ago

Apply

4.0 - 6.0 years

1 - 4 Lacs

Hyderābād

On-site

Source: Glassdoor

Job Title: Senior Data Analyst – AdTech (Team Lead)
Location: Hyderabad
Experience Level: 4–6 Years
Employment Type: Full-time
Shift Timings: 5 PM - 2 AM IST

About the Role: We are looking for a highly experienced and hands-on Senior Data Analyst (AdTech) to lead our analytics team. This role is ideal for someone with a strong background in log-level data handling, cross-platform data engineering, and a solid command of modern BI tools. You'll play a key role in building scalable data pipelines, leading analytics strategy, and mentoring a team of analysts.

Key Responsibilities:

- Lead and mentor a team of data analysts, ensuring quality delivery and technical upskilling.
- Design, develop, and maintain scalable ETL/ELT pipelines using GCP tools (BigQuery, Dataflow, Cloud Composer, Cloud Functions, Pub/Sub).
- Ingest and process log-level data from platforms like Google Ad Manager, Google Analytics (GA4/UA), DV360, and other advertising and marketing tech sources.
- Build and optimize data pipelines from diverse sources via APIs, cloud connectors, and third-party tools (e.g., Supermetrics, Fivetran, Stitch).
- Integrate and manage data across multiple cloud platforms and data warehouses such as BigQuery, Snowflake, DOMO, and AWS (Redshift, S3).
- Own the creation of data models, data marts, and analytical layers to support dashboards and deep-dive analyses.
- Build and maintain scalable, intuitive dashboards using Looker Studio, Tableau, Power BI, or Looker.
- Partner with engineering, product, revenue ops, and client teams to gather requirements and drive strategic insights from data.
- Ensure data governance, security, and quality standards are followed across the analytics ecosystem.

Required Qualifications:

- 4–6 years of experience in data analytics or data engineering roles, with at least 1–2 years in a leadership capacity.
- Deep expertise working with log-level AdTech data: Google Ad Manager, Google Analytics, GA4, programmatic delivery logs, and campaign-level data.
- Strong knowledge of SQL and Google BigQuery for large-scale data querying and transformation (see the sketch below).
- Hands-on experience building data pipelines using GCP tools (Dataflow, Composer, Cloud Functions, Pub/Sub, Cloud Storage).
- Proven experience integrating data from various APIs and third-party connectors.
- Experience working with multiple data warehouses: Snowflake, DOMO, AWS Redshift, etc.
- Strong skills in data visualization tools: Looker Studio, Tableau, Power BI, or Looker.
- Excellent stakeholder communication and documentation skills.

Preferred Qualifications:

- Scripting experience in Python or JavaScript for automation and custom ETL development.
- Familiarity with version control (e.g., Git), CI/CD pipelines, and workflow orchestration.
- Exposure to privacy regulations and consent-based data handling in digital advertising (GDPR, CCPA).
- Experience working in agile environments and managing delivery timelines across multiple stakeholders.
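As a hedged illustration of the "SQL and Google BigQuery for large-scale data querying" requirement, the snippet below aggregates a week of log-level impressions with the google-cloud-bigquery client; the project, dataset, and schema are invented for the example.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-adtech-project")  # hypothetical project

# Hypothetical Ad Manager-style log table: one row per ad impression.
QUERY = """
    SELECT
      DATE(event_time) AS day,
      advertiser_id,
      COUNT(*) AS impressions,
      COUNTIF(clicked) AS clicks,
      SAFE_DIVIDE(COUNTIF(clicked), COUNT(*)) AS ctr
    FROM `example-adtech-project.ad_logs.impressions`
    WHERE DATE(event_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY day, advertiser_id
    ORDER BY day, impressions DESC
"""

for row in client.query(QUERY).result():  # submits the job and waits
    print(row.day, row.advertiser_id, row.ctr)
```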

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Platform Support: Provide technical support and troubleshoot issues related to the Starburst Enterprise Platform. Ensure platform performance, availability, and reliability using Helm charts for resource management.

Deployment and Configuration: Manage deployment and configuration of the Starburst Enterprise Platform on Kubernetes using Helm charts and YAML-based values files. Build and maintain Docker images as needed to support efficient, scalable deployments and integrations. Employ GitHub Actions for streamlined CI/CD processes.

User Onboarding and Support: Assist in onboarding users by setting up connections, catalogs, and data consumption client tools (see the sketch below). Address user queries and incidents, ensuring timely resolution and issue triage.

Maintenance and Optimization: Perform regular updates, patching, and maintenance tasks to ensure optimal platform performance. Conduct application housekeeping and review user query logs and access audits.

Scripting and Automation: Develop automation scripts using Python and GitHub pipelines to enhance operational efficiency. Document workflows and ensure alignment with business objectives.

Broader Knowledge and Integration: Maintain expertise in technologies like Immuta, Apache Ranger, Collibra, Snowflake, PostgreSQL, Redshift, Hive, Iceberg, dbt, AWS Lambda, AWS Glue, and Power BI. Provide insights and recommendations for platform improvements and integrations.

New Feature Development and Integration: Collaborate with feature and product development teams to design and implement new features and integrations with other data product value chain systems and tools. Assist in defining specifications and requirements for feature enhancements and new integrations.

Automation and Innovation: Identify opportunities for process automation and implement solutions to enhance operational efficiency. Innovate and contribute to the development of new automation tools and technologies.

Incident Management: Support incident management processes, including triaging and resolving technical challenges efficiently.

Qualifications

- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience supporting and maintaining applications deployed on Kubernetes using Helm charts and Docker images.
- Understanding of RDS, GitHub Actions, and CI/CD pipelines.
- Proficiency in Python and YAML scripting for automation and configuration.
- Excellent problem-solving skills and the ability to support users effectively.
- Strong verbal and written communication skills.

Preferred Qualifications

- Experience working with Kubernetes (k8s).
- Knowledge of data and analytical products like Immuta, Apache Ranger, Collibra, Snowflake, PostgreSQL, Redshift, Hive, Iceberg, dbt, AWS Lambda, AWS Glue, and Power BI.
- Familiarity with cloud environments such as AWS.
- Knowledge of additional scripting languages or tools is a plus.

Beneficial Experience: Exposure to Starburst or other data virtualization technologies like Dremio, Trino, Presto, and Athena.
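For the "data consumption client tools" part of the onboarding work, here is a minimal sketch of querying Starburst from Python with the open-source trino client; the coordinator host, catalog, schema, and table are placeholders.

```python
import trino

# Hypothetical Starburst coordinator endpoint and catalog/schema.
conn = trino.dbapi.connect(
    host="starburst.example.internal",
    port=443,
    http_scheme="https",
    user="platform-support",
    catalog="hive",
    schema="analytics",
)

cur = conn.cursor()
# The engine federates the query down to the underlying catalog.
cur.execute(
    "SELECT event_date, count(*) FROM page_events "
    "WHERE event_date >= DATE '2024-01-01' "
    "GROUP BY event_date ORDER BY event_date"
)
for event_date, n in cur.fetchall():
    print(event_date, n)
```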

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Company Description: JMAN Group is a growing technology-enabled management consultancy that empowers organizations to create value through data. Founded in 2010, we are a team of 450+ consultants based in London, UK, and a team of 300+ engineers in Chennai, India. Having delivered multiple projects in the US, we are now opening a new office in New York to help us support and grow our US client base.

We approach business problems with the mindset of a management consultancy and the capabilities of a tech company. We work across all sectors and have in-depth experience in private equity, pharmaceuticals, government departments, and high-street chains. Our team is as cutting edge as our work. We pride ourselves on being great to work with: no jargon or corporate-speak, flexible to change, and receptive to feedback. We have a huge focus on investing in the training and professional development of our team, to ensure they can deliver high-quality work and shape our journey to becoming a globally recognised brand. The business has grown quickly in the last 3 years with no signs of slowing down.

Technical specifications

- 5+ years of experience in data platform builds.
- Familiarity with multi-cloud data warehousing solutions (Snowflake, Redshift, Databricks, Fabric, AWS Glue, Azure Data Factory, Synapse, Matillion, dbt).
- Proficient in SQL and the Apache Spark / Python programming languages.
- Good-to-have skills include data visualization using Power BI, Tableau, or Looker, and familiarity with full-stack technologies.
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Experience with CI/CD pipelines and DevOps methodologies.
- Ability to work independently, adapt to changing priorities, and learn new technologies quickly.
- Experience implementing or working with data governance frameworks and practices to ensure data integrity and regulatory compliance.
- Knowledge of data quality tools and practices.

Responsibilities

- Design and implement data pipelines using ETL/ELT tools and techniques.
- Configure and manage data storage solutions, including relational databases, data warehouses, and data lakes.
- Develop and implement data quality checks and monitoring processes (a minimal sketch follows this listing).
- Automate data platform deployments and operations using scripting and DevOps tools (e.g., Git, CI/CD pipelines).
- Ensure compliance with data governance and security standards throughout the data platform development process.
- Troubleshoot and resolve data platform issues promptly and effectively.
- Collaborate with the Data Architect to understand data platform requirements and design specifications.
- Assist with data modelling and optimization tasks.
- Work with business stakeholders to translate their needs into technical solutions.
- Document the data platform architecture, processes, and best practices.
- Stay up to date with the latest trends and technologies in full-stack development, data engineering, and DevOps.
- Proactively suggest improvements and innovations for the data platform.

Required Skillset:

- ETL or ELT: AWS Glue / Azure Data Factory / Synapse / Matillion / dbt.
- Data warehousing: Azure SQL Server / Redshift / BigQuery / Databricks / Snowflake / Fabric (any one mandatory).
- Data visualization: Looker, Power BI, Tableau.
- SQL and the Apache Spark / Python programming languages.
- Containerization technologies (e.g., Docker, Kubernetes).
- Cloud experience: AWS / Azure / GCP.
- Scripting and DevOps tools (e.g., Git, CI/CD pipelines).
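As a minimal sketch of the data quality checks mentioned in the responsibilities above, the function below validates a batch before it is loaded; the column rules and the sample frame are invented for illustration.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
        return failures
    # Hypothetical rules for an orders extract.
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        failures.append("negative amounts present")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # tolerate at most 1% missing customer IDs
        failures.append(f"customer_id null rate {null_rate:.1%} exceeds 1%")
    return failures

batch = pd.DataFrame({
    "order_id": [1, 2, 2],
    "customer_id": [10, None, 12],
    "amount": [99.0, -5.0, 42.5],
})
for problem in run_quality_checks(batch):
    print("FAILED:", problem)  # a real pipeline would alert and halt the load
```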

Posted 6 days ago

Apply

3.0 years

4 - 7 Lacs

Chennai

On-site

Source: Glassdoor

- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines.

As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and used to working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, or Python.

Major Responsibilities:

- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
- Keep up to date with big data technologies, and evaluate and make decisions around the use of new or existing software products to design the data architecture
- Design, build, and own all the components of a high-volume data warehouse end to end
- Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment)
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
- Own the functional and nonfunctional scaling of software systems in your ownership area
- Implement big data solutions for distributed computing

Key job responsibilities: As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, drive the database design, and spearhead the best practices to deliver high-quality products.

About the team: Profit intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what, we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.

Preferred Qualifications:

- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 6 days ago

Apply

3.0 - 6.0 years

0 Lacs

Jaipur

On-site

Source: Glassdoor

ABOUT HAKKODA: Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics, and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India, and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are looking for a skilled and motivated Data Analyst / Data Engineer to join our growing data team in Jaipur. The ideal candidate should have hands-on experience with SQL, Python, and Power BI; familiarity with Snowflake is a strong advantage. You will play a key role in building data pipelines, delivering analytical insights, and enabling data-driven decision-making across the organization.

Role Description:

- Develop and manage robust data pipelines and workflows for data integration, transformation, and loading.
- Design, build, and maintain interactive Power BI dashboards and reports based on business needs.
- Optimize existing Power BI reports for performance, usability, and scalability.
- Write and optimize complex SQL queries for data analysis and reporting (see the sketch after this listing).
- Use Python for data manipulation, automation, and advanced analytics where applicable.
- Collaborate with business stakeholders to understand requirements and deliver actionable insights.
- Ensure high data quality, integrity, and governance across all reporting and analytics layers.
- Work closely with data engineers, analysts, and business teams to deliver scalable data solutions.
- Leverage cloud data platforms like Snowflake for data warehousing and analytics (good to have).

Qualifications

- 3–6 years of professional experience in data analysis or data engineering.
- Bachelor’s degree in Computer Science, Engineering, Data Science, Information Technology, or a related field.
- Strong proficiency in SQL, with the ability to write complex queries and perform data modeling.
- Hands-on experience with Power BI for data visualization and business intelligence reporting.
- Programming knowledge in Python for data processing and analysis.
- Good understanding of ETL/ELT, data warehousing concepts, and cloud-based data ecosystems.
- Excellent problem-solving skills, attention to detail, and analytical thinking.
- Strong communication and interpersonal skills to work effectively with cross-functional teams.

Preferred / Good to Have

- Experience working with large datasets and cloud platforms like Snowflake, Redshift, or BigQuery.
- Familiarity with workflow orchestration tools (e.g., Airflow) and version control systems (e.g., Git).
- Power BI certification (e.g., PL-300: Microsoft Power BI Data Analyst).
- Exposure to Agile methodologies and end-to-end BI project life cycles.

Benefits: Health insurance, paid leave, technical training and certifications, robust learning and development opportunities, incentives, Toastmasters, food program, fitness program, and a referral bonus program.

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that’s shaping the future!

Hakkoda is an IBM subsidiary, acquired by IBM and being integrated into the IBM organization; Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
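To illustrate the SQL-plus-Python analysis loop this role describes, here is a hedged sketch that pulls a monthly revenue aggregate from Snowflake into pandas; the account, credentials, and orders table are hypothetical.

```python
import pandas as pd
import snowflake.connector

# Hypothetical connection; credentials belong in a secrets manager.
conn = snowflake.connector.connect(
    account="example-account",
    user="analyst",
    password="<from-secrets-manager>",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)

QUERY = """
    SELECT region,
           DATE_TRUNC('month', order_date) AS month,
           SUM(amount) AS revenue
    FROM orders                 -- hypothetical table
    GROUP BY region, month
    ORDER BY month, region
"""

df = pd.read_sql(QUERY, conn)
# Snowflake returns uppercase column names by default.
pivot = df.pivot(index="MONTH", columns="REGION", values="REVENUE")
print(pivot.head())  # e.g., export to CSV or publish to a Power BI dataset
conn.close()
```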

Posted 6 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Primary Responsibilities

- Cloud expertise: Familiarity or hands-on experience with AWS and Google Cloud Platform (GCP) technologies to support data transformation, data structures, metadata management, dependency tracking, and workload orchestration.
- Collaboration and independence: Self-motivated and capable of supporting the data needs of multiple teams, systems, and products within Amway’s data ecosystem.
- Big data and distributed systems: Strong understanding of distributed systems for large-scale data processing and analytics, with a proven track record of manipulating, processing, and deriving insights from large, complex, and disconnected datasets.
- Database proficiency: Advanced knowledge of relational databases and SQL, with working experience across a variety of platforms, including Microsoft SQL Server and Oracle, to enhance analytics capabilities.
- Software back-end and front-end: Familiarity with programming languages that support back-end and front-end development, especially Node.js and React.js.
- Growth mindset: A passion for continuous learning and a desire to help evolve our capabilities to support machine learning and advanced analytics initiatives.

Required skills and competencies

- 8+ years of IT experience, including at least 5 years with cloud-based technologies; Node.js and React.js programming knowledge.
- Expertise in SQL (PL/SQL, BigQuery, Redshift, GraphSQL).
- Familiarity with Python programming.
- Knowledge of MongoDB, Apache Kafka, and PySpark.
- Must be competent with Confluence, Jira, GitHub, and other AWS DevOps tools.

Posted 6 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: ETL Talend Lead
Location: Bangalore, Hyderabad, Chennai, Pune
Work Mode: Hybrid
Job Type: Full-Time
Shift Timings: 2:00 PM - 11:00 PM
Years of Experience: 8 - 15 years

ETL Development Lead:

- Experience leading and mentoring a team of Talend ETL developers, providing technical direction and guidance on ETL/data integration development to the team.
- Designing complex data integration solutions using Talend and AWS.
- Collaborating with stakeholders to define project scope, timelines, and deliverables.
- Contributing to project planning, risk assessment, and mitigation strategies.
- Ensuring adherence to project timelines and quality standards.
- Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.
- Design, develop, and implement ETL (extract, transform, load) processes using Talend Studio and other Talend components.
- Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files).
- Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality.
- Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes.
- Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions.
- Perform unit testing and participate in system integration testing of ETL processes.
- Monitor and maintain Talend environments, including job scheduling and performance tuning.
- Document technical specifications, data flow diagrams, and ETL processes.
- Stay up to date with the latest Talend features, best practices, and industry trends.
- Participate in code reviews and contribute to the establishment of development standards.
- Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components.
- Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT).
- Strong SQL skills for data querying and manipulation.
- Experience with data profiling, data quality checks, and error handling within ETL processes.
- Familiarity with job scheduling tools and monitoring frameworks.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and collaboratively within a team environment.
- Basic understanding of AWS services such as EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, security groups, Route 53, network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, and Amazon DynamoDB.
- Understanding of AWS data integration services such as Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, and Step Functions.

Preferred Qualifications:

- Experience leading and mentoring a team of 8+ Talend ETL developers.
- Experience working with US healthcare customers.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Talend certifications (e.g., Talend Certified Developer), AWS Certified Cloud Practitioner / Data Engineer Associate.
- Experience with AWS data and infrastructure services.
- A basic understanding of Terraform and GitLab functionality is required.
- Experience with scripting languages such as Python or shell scripting.
- Experience with agile development methodologies.
- Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.

Posted 6 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Senior Data Analyst – AdTech (Team Lead)
Location: Hyderabad
Experience Level: 4–6 Years
Employment Type: Full-time
Shift Timings: 5 PM - 2 AM IST

About The Role

We are looking for a highly experienced and hands-on Senior Data Analyst (AdTech) to lead our analytics team. This role is ideal for someone with a strong background in log-level data handling, cross-platform data engineering, and a solid command of modern BI tools. You'll play a key role in building scalable data pipelines, leading analytics strategy, and mentoring a team of analysts.

Key Responsibilities

- Lead and mentor a team of data analysts, ensuring quality delivery and technical upskilling.
- Design, develop, and maintain scalable ETL/ELT pipelines using GCP tools (BigQuery, Dataflow, Cloud Composer, Cloud Functions, Pub/Sub).
- Ingest and process log-level data from platforms like Google Ad Manager, Google Analytics (GA4/UA), DV360, and other advertising and marketing tech sources.
- Build and optimize data pipelines from diverse sources via APIs, cloud connectors, and third-party tools (e.g., Supermetrics, Fivetran, Stitch).
- Integrate and manage data across multiple cloud platforms and data warehouses such as BigQuery, Snowflake, DOMO, and AWS (Redshift, S3).
- Own the creation of data models, data marts, and analytical layers to support dashboards and deep-dive analyses.
- Build and maintain scalable, intuitive dashboards using Looker Studio, Tableau, Power BI, or Looker.
- Partner with engineering, product, revenue ops, and client teams to gather requirements and drive strategic insights from data.
- Ensure data governance, security, and quality standards are followed across the analytics ecosystem.

Required Qualifications

- 4–6 years of experience in data analytics or data engineering roles, with at least 1–2 years in a leadership capacity.
- Deep expertise working with log-level AdTech data: Google Ad Manager, Google Analytics, GA4, programmatic delivery logs, and campaign-level data.
- Strong knowledge of SQL and Google BigQuery for large-scale data querying and transformation.
- Hands-on experience building data pipelines using GCP tools (Dataflow, Composer, Cloud Functions, Pub/Sub, Cloud Storage).
- Proven experience integrating data from various APIs and third-party connectors.
- Experience working with multiple data warehouses: Snowflake, DOMO, AWS Redshift, etc.
- Strong skills in data visualization tools: Looker Studio, Tableau, Power BI, or Looker.
- Excellent stakeholder communication and documentation skills.

Preferred Qualifications

- Scripting experience in Python or JavaScript for automation and custom ETL development.
- Familiarity with version control (e.g., Git), CI/CD pipelines, and workflow orchestration.
- Exposure to privacy regulations and consent-based data handling in digital advertising (GDPR, CCPA).
- Experience working in agile environments and managing delivery timelines across multiple stakeholders.

Posted 6 days ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Pune

Work from Office

Source: Naukri

We are looking for a highly skilled and experienced Data Engineering Manager to lead our data engineering team. The ideal candidate will possess a strong technical background, strong project management abilities, and excellent client handling/stakeholder management skills. This role requires a strategic thinker who can drive the design, development, and implementation of data solutions that meet our clients' needs while ensuring the highest standards of quality and efficiency.

Job Responsibilities

- Technology leadership: Lead and guide the team, independently or with little support, to design, implement, and deliver complex cloud-based data engineering / data warehousing project assignments.
- Solution architecture and review: Expertise in conceptualizing solution architecture and low-level design in a range of data engineering (Matillion, Informatica, Talend, Python, dbt, Airflow, Apache Spark, Databricks, Redshift) and cloud hosting (AWS, Azure) technologies.
- Manage projects in a fast-paced agile ecosystem, ensuring quality deliverables within stringent timelines.
- Responsible for risk management, maintaining the risk documentation and mitigation plans.
- Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation, and deployments.
- Communication and logical thinking: Demonstrate strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment, and effectively present and defend team viewpoints while securing buy-in from both technical and client stakeholders.
- Handle client relationships: Manage client relationships and client expectations independently, and deliver results back to the client independently, with excellent communication skills.

Education: BE/B.Tech, Master of Computer Applications.

Work Experience

- Expertise and 8+ years of working experience in at least two ETL tools among Matillion, dbt, PySpark, Informatica, and Talend.
- Expertise and working experience in at least two databases among Databricks, Redshift, Snowflake, SQL Server, and Oracle.
- Strong data warehousing, data integration, and data modeling fundamentals, such as star schema, snowflake schema, dimension tables, and fact tables.
- Strong experience with SQL building blocks, creating complex SQL queries and procedures.
- Experience in AWS or Azure cloud and their service offerings.
- Aware of techniques such as data modelling, performance tuning, and regression testing.
- Willingness to learn and take ownership of tasks.
- Excellent written/verbal communication and problem-solving skills.
- Understanding of and working experience with pharma commercial data sets like IQVIA, Veeva, Symphony, Liquid Hub, Cegedim, etc. would be an advantage.
- Hands-on in Scrum methodology (sprint planning, execution, and retrospection).

Behavioural Competencies: Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management.

Technical Competencies: Problem Solving, Lifescience Knowledge, Communication, Agile, PySpark, Data Modelling, Designing Technical Architecture, AWS Data Pipeline.

Posted 6 days ago

Apply

89.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Full-time

Company Description: GfK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers’ buying behavior and the dynamics impacting their markets, brands, and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, GfK drives “Growth from Knowledge”.

Job Description: It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ’s Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate, and measure their media activities. Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making.

Role Overview: We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions. As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms.

Key Responsibilities

Technical Leadership & Architecture:
- Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science.
- Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability.
- Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture.
- Work closely with Product Owners to translate business needs into technical solutions.

Hands-on Development & Technical Excellence:
- Lead by example through high-quality coding, code reviews, and proof-of-concept development.
- Solve complex engineering problems and contribute to critical design decisions.
- Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR.
- Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment.

Line Management & Team Development:
- Provide line management to engineers, ensuring their professional growth and development.
- Conduct performance reviews, set development goals, and mentor team members to enhance their skills.
- Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries.
- Support hiring, onboarding, and career development initiatives within the engineering team.

Collaboration & Cross-Team Coordination:
- Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components.
- Provide mentorship and guidance to developers, helping them level up their skills and technical understanding.
- Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code.
- Engage with stakeholders across the business, translating technical concepts into business-relevant insights.

Governance, Security & Data Best Practices:
- Champion data governance, lineage, and security across the platform.
- Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines.
- Ensure compliance with industry standards, internal policies, and regulatory requirements.

Qualifications

Requirements & Experience:
- Strong software engineering background with experience in designing and building production-grade applications in Python, Scala, Java, or similar languages.
- Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR.
- Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines.
- Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions (see the sketch after this listing).
- Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation).
- Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing.
- Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices.
- Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders.
- Proven line management experience, including mentoring, career development, and performance management of engineering teams.

Additional Information

Our Benefits: Flexible working environment, volunteer time off, LinkedIn Learning, Employee Assistance Program (EAP).

About NIQ: NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com. Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion: NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
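As a minimal, hedged sketch of the Airflow orchestration named in the requirements above, the DAG below chains a transform step and a Redshift load. Task names, schedule, and callables are placeholders, and the syntax assumes Airflow 2.4+ (earlier versions use schedule_interval).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def transform(**context):
    # Placeholder: trigger a Glue job or Spark step for the day's partition.
    print("transforming raw files for", context["ds"])

def load(**context):
    # Placeholder: issue a Redshift COPY for the transformed partition.
    print("loading partition", context["ds"], "into Redshift")

with DAG(
    dag_id="recombine_daily_pipeline",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_transform >> t_load  # load runs only after transform succeeds
```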

Posted 6 days ago

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Technology @Dream11: Technology is at the core of everything we do. Our technology team helps us deliver a mobile-first experience across platforms (Android & iOS) while managing over 700 million rpm (requests per minute) at peak, with user concurrency of over 16.5 million. We have more than 190 microservices written in Java and backed by the Vert.x framework. These work with isolated product features with discrete architectures to cater to the respective use cases. We work with terabytes of data, the infrastructure for which is built on top of Kafka, Redshift, Spark, Druid, etc., and it powers a number of use cases like Machine Learning and Predictive Analytics. Our tech stack is hosted on AWS, with distributed systems like Cassandra, Aerospike, Akka, VoltDB, Ignite, etc.

Your Role: Working with cross-functional teams to define, design, and launch new features. Designing and maintaining high-performance, reusable, and reliable code. Analysing designs for efficient development planning. Identifying and resolving performance bottlenecks.

Qualifiers: 3+ years of hands-on experience with JavaScript/TypeScript. Experience in the React / React Native / Android / iOS ecosystem. Strong problem-solving skills and reasoning ability.

About Dream Sports: Dream Sports is India’s leading sports technology company with 250 million users, housing brands such as Dream11, the world’s largest fantasy sports platform, FanCode, a premier sports content & commerce platform, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 ‘Sportans’. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports’ vision is to ‘Make Sports Better’ for fans through the confluence of sports and technology. For more information: https://dreamsports.group/

Dream11 is the world’s largest fantasy sports platform with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India’s leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.

Posted 6 days ago

3.0 years

0 Lacs

Delhi, India

On-site

Job Title: Analyst - Reporting & QA
Location: New Delhi

What do you need to know about us? M+C Saatchi Performance is an award-winning global digital media agency, connecting brands to people. We deliver business growth for our clients through effective, measurable, and evolving digital media strategies.

About the Role: We are looking for a highly skilled Analyst - Reporting & QA with a deep understanding of digital and mobile media to join our Reporting and QA team. This role will focus on enabling our clients to meet their media goals by ensuring data accuracy and delivering actionable insights into media performance through our reporting tools. The ideal candidate will have strong technical skills, be detail-oriented, and have experience in digital/mobile media attribution and reporting.

Core Responsibilities:
ETL & Data Automation: Use Matillion to streamline data processes, ensuring efficient and reliable data integration across all reporting systems.
Data Quality Assurance: Verify and validate data accuracy within Power BI dashboards, proactively identifying and addressing discrepancies to maintain high data integrity.
Dashboard Development: Build, maintain, and optimize Power BI dashboards to deliver real-time insights that help clients understand the performance of their digital and mobile media campaigns.
Media Performance Insights: Collaborate closely with media teams to interpret data, uncover trends, and provide actionable insights that support clients in optimizing their media investments.
Industry Expertise: Gain and apply in-depth knowledge of digital and mobile media, attribution models, and reporting frameworks to deliver valuable perspectives on media performance.
Tools & Platforms Expertise: Utilize tools such as GA4, platform reporting systems, first-party data analytics, and mobile measurement partners (MMPs) to support comprehensive media insights for clients.
Data Analysis and Measurement: Knowledge of incrementality measurement methodologies and the ability to apply these concepts to evaluate the effectiveness of marketing campaigns. Strong understanding of various marketing attribution models and experience with relevant tools and platforms.
Agency Experience: Prior experience working on the agency side, with a strong understanding of agency workflows and client service.
Analysis and Communication: Communicate attribution and incrementality insights to clients in a clear and concise manner, demonstrating the value of agency services.

Qualifications and Experience:
Education: Bachelor’s degree, preferably in Computer Science, Marketing, or a related field.
Experience: 3-4 years in a similar role, with substantial exposure to data analysis, reporting, and the digital/mobile media landscape.
Technical Skills: Proficiency in ETL tools (preferably Matillion), Power BI, and data quality control.
Industry Knowledge: Understanding of digital and mobile media, with familiarity in attribution, reporting practices, and performance metrics.
Analytical Skills: Skilled in interpreting complex data, generating actionable insights, and presenting findings effectively to non-technical stakeholders.
Communication: Excellent communicator with a proven ability to collaborate effectively across cross-functional teams and with clients.
Tools & Platforms: Proficiency in GA4, platform reporting, first-party data analysis, and mobile measurement partners (MMPs).

Desired Skills: Background in a media agency environment. Familiarity with cloud-based data platforms (e.g., AWS, Redshift).
Familiarity with advanced analytics and data visualization tools beyond Power BI. Strong collaboration skills and the ability to work independently.

What can you look forward to: Being a part of the world’s largest independent advertising holding group. Family Health Insurance Coverage. Flexible Working Hours. Regular events including Reece Lunch & indoor games. Employee Training/Learning Programs.

About M+C Saatchi Performance: M+C Saatchi Performance has pledged its commitment to create a company that values difference, with an inclusive culture brought to life through equity with business-wide activity across people, culture, industry and society. As part of this, M+C Saatchi Performance continues to be an Equal Opportunity Employer which does not and shall not discriminate, celebrates diversity, and bases all hiring and promotion decisions solely on merit, without regard for any personal characteristics. All employee information is kept confidential according to the General Data Protection Regulation (GDPR). M+C Saatchi Group was founded in 1995 and is now the biggest independent creative agency group in the world, founded on one core principle: Brutal Simplicity.
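The data quality assurance work described in this role usually boils down to codified checks that run before a dashboard refresh. Below is a minimal sketch in Python using pandas; the file name and the column names (campaign_id, date, spend, clicks, impressions) are hypothetical stand-ins, not details from the posting.

    import pandas as pd

    # Hypothetical extract of campaign metrics feeding a Power BI dashboard.
    df = pd.read_csv("campaign_metrics.csv")

    # Each check evaluates to True when the data passes.
    checks = {
        "no_null_campaign_ids": df["campaign_id"].notna().all(),
        "spend_non_negative": (df["spend"] >= 0).all(),
        "clicks_lte_impressions": (df["clicks"] <= df["impressions"]).all(),
        "no_duplicate_rows": not df.duplicated(["campaign_id", "date"]).any(),
    }

    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        raise ValueError(f"Data quality checks failed: {failed}")
    print("All data quality checks passed.")

In practice, assertions like these would run inside the ETL tool (Matillion here) or an orchestration step, so discrepancies surface before clients ever see the dashboards.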

Posted 6 days ago

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Data Scientist II
BANGALORE, KARNATAKA / TECH – DATA SCIENCE / FULL TIME EMPLOYEE

About the Team
Our Data Science team is the Avengers to our S.H.I.E.L.D 🛡. And why not? We are the ones who assemble during the toughest challenges and devise creative solutions, building intelligent systems for millions of our users looking at a thousand different categories of products. We’ve barely scratched the surface, and have amazing challenges in charting the future of commerce for Bharat. Our typical day involves dealing with fraud detection, inventory optimization, and platform vernacularization. As a Data Scientist, you will navigate uncharted territories with us, discovering new paths to creating solutions for our users.🔍 You will be at the forefront of interesting challenges and solve unique customer problems in an untapped market. But wait – there’s more to us. Our team is huge on having a well-rounded personal and professional life. When we aren't nose-deep in data, you will most likely find us belting “Summer of 69” at the nearest karaoke bar, or debating who the best Spider-Man is: Maguire, Garfield, or Holland? You tell us ☺️

About the Role
Love deep data? Love discussing solutions instead of problems? Then you could be our next Data Scientist. In a nutshell, your primary responsibility will be enhancing the productivity and utilization of the generated data. Other things you will do are: work closely with the business stakeholders; transform scattered pieces of information into valuable data; share and present your valuable insights with peers.

What You Will Do
Develop models and run experiments to infer insights from hard data. Improve our product usability and identify new growth opportunities. Understand reseller preferences to provide them with the most relevant products. Design discount programs to help our resellers sell more. Help resellers better recognize end-customer preferences to improve their revenue. Use data to identify bottlenecks that will help our suppliers meet their SLA requirements. Model seasonal demand to predict key organizational metrics. Mentor junior data scientists in the team.

What You Will Need
Bachelor's/Master's degree in computer science (or similar degrees). 2-4 years of experience as a Data Scientist in a fast-paced organization, preferably B2C. Familiarity with Neural Networks, Machine Learning, etc. Familiarity with tools like SQL, R, Python, etc. Strong understanding of Statistics and Linear Algebra. Strong understanding of hypothesis/model testing and the ability to identify common model testing errors. Experience designing and running A/B tests and drawing insights from them (see the sketch following this posting). Proficiency in machine learning algorithms. Excellent analytical skills to fetch data from reliable sources to generate accurate insights. Experience in tech and product teams is a plus. Bonus points for: experience in working on personalization or other ML problems; familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift.

About
It is India’s fastest-growing e-commerce company. We started in 2015 with the idea of helping mom & pop stores to sell online. Today, 5% of Indian households shop with us on any given day. We’ve helped over 15 million individual entrepreneurs start online businesses with zero investment. We’re democratizing internet commerce by offering a 0% commission model for sellers on our platform — a first for India. We aim to become the e-commerce destination for Bharat. We’re currently valued at $4.9 billion with marquee investors supporting our vision.
Some of them include Sequoia Capital, SoftBank, Fidelity, Prosus Ventures, Facebook, and Elevation Capital. We were also featured in Y Combinator’s 2021 Top Companies List and were the only Indian startup to make it to Fast Company’s The World’s 50 Most Innovative Companies in 2020. We ranked 6th in LinkedIn's Top Startups List 2021. Our strongest asset is our people. We have gender-neutral and inclusive policies to promote our people-first culture.

Our Mission
Democratize internet commerce for everyone.

Our Vision
Enable 100M small businesses in India to succeed online.
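Since the posting asks for hands-on experience designing and reading A/B tests, here is a minimal sketch of a two-proportion z-test in Python with statsmodels. The conversion counts and sample sizes are invented purely for illustration.

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical experiment: conversions and exposures per variant.
    conversions = [420, 510]       # control, treatment (made-up numbers)
    exposures = [10_000, 10_200]

    z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
    print(f"z = {z_stat:.3f}, p = {p_value:.4f}")

    # A common (though not universal) decision rule at the 5% level.
    if p_value < 0.05:
        print("Conversion rates differ significantly between variants.")
    else:
        print("No significant difference detected.")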

Posted 6 days ago

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About The Advanced Analytics Team
The central Advanced Analytics team at the Abbott Established Pharma Division’s (EPD) headquarters in Basel helps define and lead the transformation towards becoming a global, data-driven company with the help of data and advanced technologies (e.g., Machine Learning, Deep Learning, Generative AI, Computer Vision). To us, Advanced Analytics is an important lever to reach our business targets, now and in the future: it helps us differentiate ourselves from our competition and ensure sustainable revenue growth at optimal margins. Hence, the central AA team is an integral part of the Strategy Management Office at EPD and has a very close link and regular interactions with the EPD Senior Leadership Team.

Primary Job Function
With the above requirements in mind, EPD is looking to fill the role of a Cloud Engineer reporting to the Head of AA Product Development. The Cloud Engineer will be responsible for developing applications leveraging AWS services. This role involves leading cloud initiatives, ensuring robust cloud infrastructure, and driving innovation in cloud technologies to support the business's advanced analytics needs.

Core Job Responsibilities
Support the development and maintenance of company-wide frameworks and libraries that enable faster, better, and more informed decision-making within the business, creating significant business value from data & analytics. Ensure data availability and accessibility for the prioritized Advanced Analytics scope, and maintain stable, scalable, and modular data science pipelines from data exploration to deployment. Acquire, ingest, and process data from multiple sources and systems into our cloud platform (AWS), ensuring data integrity and security. Collaborate with data scientists to map data fields to hypotheses, and curate, wrangle, and prepare data for advanced analytical models. Implement and manage robust security measures to ensure compliant handling and management of data, including access strategies aligned with Information Security, Cyber Security, and Data Privacy principles. Develop and deploy smart automation tools based on cloud technologies, aligned with business priorities and needs. Oversee the timely delivery of Advanced Analytics solutions in coordination with the rest of the team and per requirements and timelines, ensuring alignment with business goals. Collaborate closely with the Data Science team and AI Engineers to understand platform needs and lead the development of solutions that support their work. Troubleshoot and resolve issues related to the AWS platform, ensuring minimal downtime and optimal performance. Define and document best practices and strategies regarding application deployment and infrastructure maintenance. Drive continuous improvement of the AWS Cloud platform by contributing and implementing new ideas and processes.

Supervisory/Management Responsibilities
Direct Reports: None. Indirect Reports: None.

Position Accountability/Scope
The Cloud Engineer is accountable for delivering targeted business impact per initiative in collaboration with key stakeholders. This role involves significant responsibility for the architecture and management of Abbott's strategic cloud platforms and AI/AA programs, enabling faster, better, and more informed decision-making within the business.
Minimum Education
Master's degree in a relevant field (e.g., computer science, electrical engineering).

Minimum Experience/Training Required
At least 3-5 years of relevant experience, with a strong track record of building solutions/applications using AWS services. Proven ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets. Proficiency in multiple programming languages: JavaScript, Python, Scala, PySpark, or Java. Extensive knowledge and experience with various database technologies, including distributed processing frameworks, relational databases, MPP databases, and NoSQL data stores. Deep understanding of Information Security principles to ensure compliant handling and management of data. Significant experience with cloud platforms, preferably AWS and its ecosystem. Advanced knowledge of development in CI/CD (Continuous Integration and Continuous Delivery) environments. Strong background in data warehousing / ETL tools. Proficiency in DevOps practices and tools such as Jenkins, Terraform, etc. Proficiency in serverless architecture and services like AWS Lambda. Understanding of security best practices and their implementation in cloud environments. Ability to understand business objectives and create cloud-based solutions to meet those objectives. Result-driven, analytical, and creative thinker. Proven ability to work with cross-functional teams and bridge the gap between business and data science. Fluency in English is a must; additional languages are a plus.

Additional Technical Skills
Experience with front-end frameworks, preferably React JS. Knowledge of back-end frameworks like Django, Flask, or Node.js. Familiarity with database technologies such as Redshift, MySQL, or DynamoDB. Understanding of RESTful API design and development. Experience with version control systems like CodeCommit.
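For the serverless side of this role, a common ingestion pattern is an AWS Lambda function triggered by S3 uploads. The sketch below, in Python with boto3, promotes newly landed raw files into a curated prefix; the bucket layout and prefixes are hypothetical, chosen only to illustrate the pattern.

    import json
    import urllib.parse

    import boto3

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        """Triggered by an S3 put event; promotes raw files to a curated prefix."""
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            # Hypothetical convention: objects under raw/ are copied to curated/.
            if key.startswith("raw/"):
                dest_key = "curated/" + key[len("raw/"):]
                s3.copy_object(
                    Bucket=bucket,
                    CopySource={"Bucket": bucket, "Key": key},
                    Key=dest_key,
                )
        return {"statusCode": 200, "body": json.dumps("ok")}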

Posted 6 days ago

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Amex GBT is a place where colleagues find inspiration in travel as a force for good and – through their work – can make an impact on our industry. We’re here to help our colleagues achieve success and offer an inclusive and collaborative culture where your voice is valued.

We are looking for an experienced Data ETL Developer / BI Engineer who loves solving complex problems across a full spectrum of data & technologies. You will lead the build-out of GBT's new BI platform and manage the legacy platform to seamlessly support our business functions around data and analytics. You will create dashboards, databases, and other platforms that allow for the efficient collection and evaluation of BI data.

What You’ll Do on a Typical Day:
Design, implement, and maintain systems that collect and analyze business intelligence data. Design and architect an analytical data store or cluster for the enterprise and implement data pipelines that extract, transform, and load data into an information product that helps the organization reach strategic goals. Create physical and logical data models to store and share data that can be easily consumed for different BI needs. Develop Tableau dashboards and features. Create scalable, high-performance data load and management processes to make data available in near real time to support on-demand analytics and insights (see the sketch following this posting). Translate complex technical and functional requirements into detailed designs. Investigate and analyze alternative solutions to data storing, processing, etc., to ensure the most streamlined approaches are implemented. Serve as a mentor to junior staff by conducting technical training sessions and reviewing project outputs. Design, develop, and maintain data models and implement ETL processes. Manage and maintain the database, warehouse, and cluster with other dependent infrastructure. Work closely with data, product, and other teams to implement data analytics solutions. Support production applications and incident management. Help define data governance policies and support data versioning processes. Maintain security and data privacy by working closely with the Data Protection Officer internally. Analyze a vast number of data stores and uncover insights.

What We’re Looking For:
Degree in computer science or engineering. 3-5 years of overall experience in data warehousing, ETL, and data modeling. 2+ years of experience working with and managing large data stores, complex data pipelines, and BI solutions. Strong experience in SQL and writing complex queries. Hands-on experience with Tableau development. Hands-on working experience with Redshift, data modeling, data warehousing, ETL tools, Python, and shell scripting. Understanding of data warehousing and data modeling techniques. Strong data engineering skills on the AWS Cloud Platform are essential. Knowledge of Linux, SQL, and any scripting language. Good interpersonal skills and a positive attitude. Experience with travel data would be a plus.

Location
Gurgaon, India

The #TeamGBT Experience
Work and life: Find your happy medium at Amex GBT. Flexible benefits are tailored to each country and start the day you do. These include health and welfare insurance plans, retirement programs, parental leave, adoption assistance, and wellbeing resources to support you and your immediate family. Travel perks: get a choice of deals each week from major travel providers on everything from flights to hotels to cruises and car rentals.
Develop the skills you want when the time is right for you, with access to over 20,000 courses on our learning platform, leadership courses, and new job openings available to internal candidates first. We strive to champion Inclusion in every aspect of our business at Amex GBT. You can connect with colleagues through our global INclusion Groups, centered around common identities or initiatives, to discuss challenges, obstacles, and achievements, and drive company awareness and action. And much more!

All applicants will receive equal consideration for employment without regard to age, sex, gender (and characteristics related to sex and gender), pregnancy (and related medical conditions), race, color, citizenship, religion, disability, or any other class or characteristic protected by law. Click Here for Additional Disclosures in Accordance with the LA County Fair Chance Ordinance. Furthermore, we are committed to providing reasonable accommodation to qualified individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the hiring process. For details regarding how we protect your data, please consult the Amex GBT Recruitment Privacy Statement.

What if I don’t meet every requirement? If you’re passionate about our mission and believe you’d be a phenomenal addition to our team, don’t worry about “checking every box”; please apply anyway. You may be exactly the person we’re looking for!
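A typical near-real-time load into Redshift stages files in S3 and bulk-loads them with the COPY command. Below is a minimal Python sketch using psycopg2; the cluster endpoint, table name, S3 path, and IAM role ARN are placeholders, not details from the posting.

    import psycopg2

    # Hypothetical cluster endpoint and credentials; replace with real values.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="etl_user", password="...",
    )

    copy_sql = """
        COPY sales_staging
        FROM 's3://example-bucket/sales/2024-06-01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS PARQUET;
    """

    with conn, conn.cursor() as cur:
        cur.execute(copy_sql)                  # bulk-load staged files from S3
        cur.execute("ANALYZE sales_staging;")  # refresh planner statistics
    conn.close()

COPY parallelizes the load across the cluster's slices, which is why staging many moderately sized files usually outperforms loading one large file.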

Posted 6 days ago

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: Data Architect
Skills: GCP, DA, Development, SQL, Python, BigQuery, Dataproc, Dataflow, Data Pipelines
Exp: 10+ Yrs

Roles and Responsibilities
• 10+ years of relevant work experience, including previous experience leading data-related projects in the field of Reporting and Analytics.
• Design, build, and maintain scalable data lakes and data warehouses in the cloud (GCP).
• Expertise in gathering business requirements, analysing business needs, and defining the BI/DW architecture to support and help deliver technical solutions to complex business and technical requirements.
• Create solution prototypes and participate in technology selection; perform POCs and technical presentations.
• Architect, develop, and test scalable data warehouse and data pipeline architectures in cloud technologies (GCP).
• Experience in SQL and NoSQL DBMSs like MS SQL Server, MySQL, PostgreSQL, DynamoDB, Cassandra, MongoDB.
• Design and develop scalable ETL processes, including error handling.
• Expert in query and programming languages: MS SQL Server, T-SQL, PostgreSQL, MySQL, Python, R.
• Prepare data structures for advanced analytics and self-service reporting using MS SQL, SSIS, SSRS.
• Write scripts for stored procedures, database snapshot backups, and data archiving.
• Experience with any of these cloud-based technologies:
  o Power BI/Tableau, Azure Data Factory, Azure Synapse, Azure Data Lake
  o AWS Redshift, Glue, Athena, AWS QuickSight
  o Google Cloud Platform

Good to have:
• Agile development environment pairing DevOps with CI/CD pipelines
• AI/ML background

Interested candidates can share their CV at dikshith.nalapatla@motivitylabs.com.
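Since the role centres on GCP and BigQuery, here is a minimal query sketch using the google-cloud-bigquery client for Python; the project, dataset, and table names are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()  # picks up default credentials and project

    # Hypothetical dataset and table; aggregate the last 30 days of revenue.
    sql = """
        SELECT region, SUM(revenue) AS total_revenue
        FROM `example_project.sales.orders`
        WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
        GROUP BY region
        ORDER BY total_revenue DESC
    """

    for row in client.query(sql).result():
        print(f"{row.region}: {row.total_revenue:,.2f}")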

Posted 6 days ago

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote

🔥 Sr Java Engineer - WFH/Remote

This is a fully remote working opportunity. If you are interested and fulfill the criteria below, send the following details:
  1. Email ID
  2. Years of relevant experience
  3. CCTC, ECTC
  4. Notice period

Must haves
5+ years of experience in web development in similar environments;
Bachelor’s degree in Computer Science, Information Security, or a related technology field;
Strong knowledge of Java 8 and 17, Spring, and Spring Boot;
Experience with microservices and events;
Great experience with, and passion for, creating documentation for code and business processes;
Expertise in architectural design and code review, with a strong grasp of SOLID principles;
Skilled in gathering and analyzing complex requirements and business processes;
Contribute to the development of our internal tools and reusable architecture;
Experience creating optimized code and performance improvements for production systems and applications;
Experience debugging, refactoring applications, and replicating scenarios to solve issues and understand the business;
Familiarity with unit and system testing frameworks (e.g., JUnit, Mockito);
Proficient in using Git;
Dedicated: own the apps you and your team are developing and take quality very seriously;
Problem solving: proactively solve problems before they can become real problems;
Constantly upgrading your skill set and applying those practices;
Upper-intermediate English level.

Nice to haves
Experience with Test-Driven Development;
Experience with logistics software (delivery, transportation, route planning), RSA domain;
Experience with AWS services like ECS, SNS, SQS, and Redshift.

Posted 6 days ago

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

About the Team
Data is at the foundation of DoorDash's success. The Data Engineering team builds database solutions for various use cases including reporting, product analytics, marketing optimization, and financial reporting. By implementing pipelines, data structures, and data warehouse architectures, this team serves as the foundation for decision-making at DoorDash.

About the Role
DoorDash is looking for a Senior Data Engineer to be a technical powerhouse to help us scale our data infrastructure, automation, and tools to meet growing business needs.

You're excited about this opportunity because you will…
Work with business partners and stakeholders to understand data requirements. Work with engineering, product teams, and 3rd parties to collect required data. Design, develop, and implement large-scale, high-volume, high-performance data models and pipelines for Data Lake and Data Warehouse. Develop and implement data quality checks, conduct QA, and implement monitoring routines (a minimal orchestration sketch follows this posting). Improve the reliability and scalability of our ETL processes. Manage a portfolio of data products that deliver high-quality, trustworthy data. Help onboard and support other engineers as they join the team.

We're excited about you because…
5+ years of professional experience. 3+ years of experience working in data engineering, business intelligence, or a similar role. Proficiency in programming languages such as Python/Java. 3+ years of experience in ETL orchestration and workflow management tools like Airflow, Flink, Oozie, and Azkaban using AWS/GCP. Expert in database fundamentals, SQL, and distributed computing. 3+ years of experience with the distributed data ecosystem (Spark, Hive, Druid, Presto) and streaming technologies such as Kafka/Flink. Experience working with Snowflake, Redshift, PostgreSQL, and/or other DBMS platforms. Excellent communication skills and experience working with technical and non-technical teams. Knowledge of reporting tools such as Tableau, Superset, and Looker. Comfortable working in a fast-paced environment; a self-starter and self-organizing. Ability to think strategically, analyze, and interpret market and consumer information. You must be located near one of our engineering hubs indicated above.

Notice to Applicants for Jobs Located in NYC or Remote Jobs Associated With Office in NYC Only
We use Covey as part of our hiring and/or promotional process for jobs in NYC and certain features may qualify it as an AEDT in NYC. As part of the hiring and/or promotion process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound from August 21, 2023, through December 21, 2023, and resumed using Covey Scout for Inbound again on June 29, 2024. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: Covey

About DoorDash
At DoorDash, our mission to empower local economies shapes how our team members move quickly, learn, and reiterate in order to make impactful decisions that display empathy for our range of users—from Dashers to merchant partners to consumers. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers.
We're committed to supporting employees' happiness, healthiness, and overall well-being by providing comprehensive benefits and perks.

Our Commitment to Diversity and Inclusion
We're committed to growing and empowering a more inclusive community within our company, industry, and cities. That's why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel. If you need any accommodations, please inform your recruiting contact upon initial connection.

We use Covey as part of our hiring and/or promotional process for jobs in certain locations. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: https://getcovey.com/nyc-local-law-144

To request a reasonable accommodation under applicable law or alternate selection process, please inform your recruiting contact upon initial connection.
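The orchestration experience called out above typically looks like a DAG of extract, transform, and load tasks. Here is a minimal Airflow 2.x sketch in Python; the DAG id, schedule, and task bodies are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(**_):
        print("pull raw orders from the source system")   # placeholder step

    def transform(**_):
        print("clean and aggregate the raw orders")       # placeholder step

    def load(**_):
        print("load aggregates into the warehouse")       # placeholder step

    with DAG(
        dag_id="orders_daily_etl",        # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # Airflow 2.4+ schedule syntax
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load

Data quality checks and monitoring routines would slot in as additional tasks between transform and load, failing the run before bad data reaches downstream consumers.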

Posted 6 days ago

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

This role is for one of Weekday's clients.
Min Experience: 4 years
Location: Ahmedabad
JobType: full-time

We are seeking a highly skilled Senior Database Administrator with 5-8 years of experience in data engineering and database management. The ideal candidate will have a strong foundation in data architecture, modeling, and pipeline orchestration. Hands-on experience with modern database technologies and exposure to generative AI tools in production environments will be a significant advantage. This role involves leading efforts to streamline data workflows, improve automation, and deliver high-impact insights across the organization.

Requirements

Key Responsibilities:
Design, develop, and manage scalable and efficient data pipelines (ETL/ELT) across multiple database systems (a batch-ETL sketch follows this posting). Architect and maintain high-availability, secure, and scalable data storage solutions. Utilize generative AI tools to automate data workflows and enhance system capabilities. Collaborate with engineering, analytics, and data science teams to fulfill data requirements and optimize data delivery. Implement and monitor data quality standards, governance practices, and compliance protocols. Document data architectures, systems, and processes for transparency and maintainability. Apply data modeling best practices to support optimal storage and querying performance. Continuously research and integrate emerging technologies to advance the data infrastructure.

Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 5-8 years of experience in database administration and data engineering for large-scale systems. Proven experience in designing and managing relational and non-relational databases.

Mandatory Skills:
SQL: proficient in advanced queries, performance tuning, and database management. NoSQL: experience with at least one NoSQL database such as MongoDB, Cassandra, or CosmosDB. Hands-on experience with at least one of the following cloud data warehouses: Snowflake, Redshift, BigQuery, or Microsoft Fabric. Cloud expertise: strong experience with Azure and its data services. Working knowledge of Python for scripting and data processing (e.g., Pandas, PySpark). Experience with ETL tools such as Apache Airflow, Microsoft Fabric, Informatica, or Talend. Familiarity with generative AI tools and their integration into data pipelines.

Preferred Skills & Competencies:
Deep understanding of database performance, tuning, backup, recovery, and security. Strong knowledge of data governance, data quality management, and metadata handling. Experience with Git or other version control systems. Familiarity with AI/ML-driven data solutions is a plus. Excellent problem-solving skills and the ability to resolve complex database issues. Strong communication skills to collaborate with cross-functional teams and stakeholders. Demonstrated ability to manage projects and mentor junior team members. Passion for staying updated with the latest trends and best practices in database and data engineering technologies.
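To make the pipeline responsibilities concrete, here is a minimal PySpark batch-ETL sketch; the input/output paths, column names, and filter condition are hypothetical, and a real deployment would point at cloud storage with the appropriate connectors.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Hypothetical raw extract; a real job would read from cloud storage.
    orders = spark.read.parquet("/data/raw/orders/")

    daily = (
        orders
        .filter(F.col("status") == "completed")          # keep fulfilled orders
        .groupBy("order_date")
        .agg(
            F.sum("amount").alias("daily_revenue"),
            F.countDistinct("customer_id").alias("unique_customers"),
        )
    )

    # Overwrite the curated daily aggregate used by downstream reporting.
    daily.write.mode("overwrite").parquet("/data/curated/daily_revenue/")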

Posted 6 days ago

4.0 - 8.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

Title – Product Manager
Experience: 4 to 8 years
Skills required: Business Analysis; managed products developed using AWS services like S3, API Gateway, Lambda, DynamoDB, Redshift; Stakeholder Management; Backlog Prioritization

Job Description

The Area: Data Lake is a smart object store on AWS that allows storage of, and access to, data. The files in Data Lake are stored in raw or unstructured format, as compared to a structured DB, and are accessible to run a variety of analytics as needed on the available data. As a roadmap, Data Lake would be used across Morningstar to store and access all their structured and unstructured data across the teams, making it a single source of information.

The Role: We are looking for an experienced, enthusiastic, results-driven individual to help advance our offerings to Morningstar's internal users. The ideal candidate will deeply understand the financial markets and financial data. The candidate should have worked extensively on developing new products and digital propositions from concept through to launch. This business visionary will work with internal partners in Product, Research, and Investment Management to drive innovation in our product offerings. This position is based in our Mumbai office.

Responsibilities: Work within an Agile software development framework; develop business requirements and user stories refined and validated with customers and stakeholders; prioritize the backlog queue across multiple projects and workstreams; and ensure high-quality execution working with development and business analyst squad members. Work with external and internal project stakeholders to define and document project scope, plan product phases/versions, the Minimum Viable Product, and overall product deliveries. Work with other product and capability owners from across the organization to develop a product integration vision that supports and advances their business goals. Work with cross-functional leaders to determine technology, design, and project management resource requirements to execute and deliver on commitments. Proactively communicate project delivery risks to key stakeholders to ensure timely deliverables. Own the tactical roadmap, requirements, and product development lifecycle for a squad to deliver high-performing Enterprise Components to our end clients. Understand business, operations, and technology requirements, serving as a conduit between stakeholders, operations, and technology teams. Define and track key performance indicators (KPIs) and measurements of product success.

Requirements: Candidates must have a minimum of a bachelor's degree with excellent academic credentials; an MBA is highly desired. At least five years of business experience in the financial services industry. Candidates must have domain expertise, particularly in developing products using the AWS platform. Superior business judgment; analytical, planning, and decision-making skills; and exemplary communication and presentation abilities. An action-oriented individual possessing an entrepreneurial mindset. Demonstrated ability to lead and build the capabilities of a driven and diverse team. Able to thrive in a fast-paced work environment, exhibit a passion for innovation, and harbor a genuine belief in, and acceptance of, Morningstar's core values. Ability to develop strong internal and external partnerships and work effectively across different business and functional areas. AWS Certification is a big plus.

Morningstar is an equal-opportunity employer.
Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues.

Legal Entity: Morningstar India Private Ltd. (Delhi)
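To ground the Data Lake description above, here is a minimal Python sketch with boto3 that lands a raw JSON record under a date-partitioned S3 prefix and lists what has arrived; the bucket name, prefix layout, and payload are hypothetical.

    import json
    from datetime import datetime, timezone

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-data-lake"  # hypothetical bucket name

    # Land a raw, unstructured record under a date-partitioned prefix.
    record = {"source": "prices-feed", "payload": {"ticker": "ABC", "close": 101.5}}
    today = datetime.now(timezone.utc)
    key = f"raw/prices/{today:%Y/%m/%d}/event.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(record).encode())

    # List today's landed objects for downstream analytics to pick up.
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"raw/prices/{today:%Y/%m/%d}/")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

Date-partitioned prefixes like this are a common convention because downstream engines (Athena, Redshift Spectrum, Glue) can prune partitions instead of scanning the whole bucket.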

Posted 6 days ago

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find opportunities across a wide range of industries in the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies with experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may progress through roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium; see the sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
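Several of the questions above (SORTKEY vs. DISTKEY, COPY, vacuuming) come down to how a table is physically laid out. As a study aid, here is a minimal sketch in Python with psycopg2 that creates a table with an explicit distribution and sort key on a Redshift cluster; the endpoint, credentials, and schema are hypothetical placeholders.

    import psycopg2

    # Hypothetical cluster endpoint and credentials.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="dev", user="admin", password="...",
    )

    ddl = """
        CREATE TABLE IF NOT EXISTS page_views (
            user_id    BIGINT,
            page_url   VARCHAR(1024),
            viewed_at  TIMESTAMP
        )
        DISTSTYLE KEY
        DISTKEY (user_id)     -- co-locate each user's rows on one slice for joins
        SORTKEY (viewed_at);  -- date-restricted scans can skip disk blocks
    """

    with conn, conn.cursor() as cur:
        cur.execute(ddl)
    conn.close()

In short, DISTKEY controls which slice stores a row (minimizing data shuffling for joins on user_id), while SORTKEY controls on-disk order (letting range filters on viewed_at skip blocks). That distinction is exactly what the SORTKEY/DISTKEY interview question is probing.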

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!
