1759 Redshift Jobs - Page 13

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position: Data Architect

Roles and Responsibilities:
- 10+ years of relevant work experience, including previous experience leading data-related projects in the field of Reporting and Analytics
- Design, build and maintain a scalable data lake and data warehouse in the cloud (GCP)
- Expertise in gathering business requirements, analysing business needs, and defining the BI/DW architecture to support and deliver technical solutions to complex business and technical requirements
- Create solution prototypes and participate in technology selection; perform POCs and technical presentations
- Architect, develop and test scalable data warehouse and data pipeline architectures in cloud technologies (GCP)
- Experience in SQL and NoSQL DBMS such as MS SQL Server, MySQL, PostgreSQL, DynamoDB, Cassandra and MongoDB
- Design and develop scalable ETL processes, including error handling (see the sketch below)
- Expert in query and programming languages: MS SQL Server, T-SQL, PostgreSQL, MySQL, Python, R
- Prepare data structures for advanced analytics and self-service reporting using MS SQL, SSIS and SSRS
- Write scripts for stored procedures, database snapshot backups and data archiving
- Experience with any of these cloud-based technologies: Power BI/Tableau; Azure Data Factory, Azure Synapse, Azure Data Lake; AWS Redshift, Glue, Athena, AWS QuickSight; Google Cloud Platform

Good to Have:
- Agile development environment pairing DevOps with CI/CD pipelines
- AI/ML background

Skills: data warehousing, aws redshift, data architect, mongodb, gcp, analytics, data lake, agile, powerbi, etl, business intelligence, data, r, ci/cd, sql, dynamodb, azure data factory, nosql, cassandra, devops, python, tableau, t-sql
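The ETL-with-error-handling requirement lends itself to a quick illustration. The following is a minimal, hypothetical Python sketch (file names and field names are invented, not from the posting) of a load step that quarantines bad records in a dead-letter file instead of failing the whole run.

```python
# Minimal ETL sketch with per-record error handling -- illustrative only.
# Source/target names are hypothetical placeholders.
import csv
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(path):
    """Yield rows from a CSV source."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(row):
    """Normalize one record; raises on bad input."""
    return {
        "customer_id": int(row["customer_id"]),
        "amount": round(float(row["amount"]), 2),
    }

def load(rows, out_path):
    """Write transformed rows; bad records go to a dead-letter file for replay."""
    ok, bad = 0, 0
    with open(out_path, "w", newline="") as out, \
         open(out_path + ".rejects", "w", newline="") as rej:
        writer = csv.DictWriter(out, fieldnames=["customer_id", "amount"])
        writer.writeheader()
        reject = csv.writer(rej)
        for row in rows:
            try:
                writer.writerow(transform(row))
                ok += 1
            except (KeyError, ValueError) as exc:
                reject.writerow([row, str(exc)])  # keep the failure for replay
                bad += 1
    log.info("loaded=%d rejected=%d", ok, bad)

if __name__ == "__main__":
    load(extract("sales.csv"), "sales_clean.csv")
```

The dead-letter pattern keeps one malformed record from aborting a multi-million-row load, which is the usual design choice in scalable ETL.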

Posted 5 days ago

Apply

10.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

The Data Architect is responsible for defining and leading the Data Architecture, Data Quality and Data Governance for systems ingesting, processing, and storing millions of rows of data per day. This hands-on role helps solve real big data problems. You will work with our product, business and engineering stakeholders, understand our current ecosystems, build consensus on designing solutions, write code and automation, define standards, establish best practices across the company, and build world-class data solutions and applications that power crucial business decisions throughout the organization. We are looking for an open-minded, structured thinker passionate about building systems at scale.

Role
- Design, implement and lead Data Architecture, Data Quality and Data Governance
- Define data modeling standards and foundational best practices
- Develop and evangelize data quality standards and practices
- Establish data governance processes, procedures, policies, and guidelines to maintain the integrity and security of the data
- Drive the successful adoption of organizational data utilization and self-serviced data platforms
- Create and maintain critical data standards and metadata that allow data to be understood and leveraged as a shared asset
- Develop standards and write template code for sourcing, collecting, and transforming data for streaming or batch processing
- Design data schemas, object models, and flow diagrams to structure, store, process, and integrate data
- Provide architectural assessments, strategies, and roadmaps for data management
- Apply hands-on subject matter expertise in the architecture and administration of big data platforms and data lake technologies (AWS S3/Hive), with experience in ML and data science platforms
- Implement and manage industry best-practice tools and processes such as Data Lake, Databricks, Delta Lake, S3, Spark ETL, Airflow (see the sketch below), Hive Catalog, Redshift, Kafka, Kubernetes, Docker and CI/CD
- Translate big data and analytics requirements into data models that operate at large scale and high performance, and guide the data analytics engineers on these models
- Define templates and processes for the design and analysis of data models, data flows, and integration
- Lead and mentor Data Analytics team members in best practices, processes, and technologies in data platforms

Qualifications
- B.S. or M.S. in Computer Science, or equivalent degree
- 10+ years of hands-on experience in data warehousing, ETL, data modeling and reporting
- 7+ years of hands-on experience in productionizing and deploying big data platforms and applications
- Hands-on experience working with relational/SQL databases, distributed columnar data stores/NoSQL databases, time-series databases, Spark streaming, Kafka, Hive, Delta, Parquet, Avro, and more
- Extensive experience understanding a variety of complex business use cases and modeling the data in the data warehouse
- Highly skilled in SQL, Python, Spark, AWS S3, Hive Data Catalog, Parquet, Redshift, Airflow, and Tableau or similar tools
- Proven experience building a custom enterprise data warehouse or implementing tools like data catalogs, Spark, Tableau, Kubernetes, and Docker
- Knowledge of infrastructure requirements such as networking, storage, and hardware optimization, with hands-on experience in Amazon Web Services (AWS)
- Strong verbal and written communication skills; able to work effectively across internal and external organizations and virtual teams
- Demonstrated industry leadership in the fields of data warehousing, data science, and big data related technologies
- Strong understanding of distributed systems and container-based development using the Docker and Kubernetes ecosystem
- Deep knowledge of data structures and algorithms
- Experience working in large teams using CI/CD and agile methodologies

Unique ID -
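Since Airflow appears in both the tooling list and the qualifications, here is a minimal sketch of the batch-orchestration pattern such a role implies, assuming Airflow 2.4+; the DAG id and both task callables are placeholders, not anything named in the posting.

```python
# Minimal Airflow DAG sketch -- illustrative placeholders throughout.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # Placeholder: pull one day's records from the source system to S3.
    print("extracting partition", context["ds"])

def load_to_redshift(**context):
    # Placeholder: COPY the staged S3 partition into Redshift.
    print("loading partition", context["ds"])

with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    extract >> load  # load runs only after extract succeeds
```

The `extract >> load` dependency is the core of the pattern: Airflow retries, backfills, and alerts around each task boundary.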

Posted 5 days ago

Apply

3.0 years

0 Lacs

India

Remote

Source: LinkedIn

AWS Data Engineer

Location: Remote (India)
Experience: 3+ Years
Employment Type: Full-Time

About the Role:
We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:
- Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena (see the sketch below)
- Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
- Optimize pipeline performance and ensure reliability and fault tolerance
- Collaborate with cross-functional teams including data scientists and analysts
- Perform data transformations using Python, Pandas, and SQL
- Maintain data integrity, quality, and security across the platform
- Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
- Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
- Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions

Required Skills and Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
- 3+ years of experience in data engineering using AWS
- Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
- Solid understanding of data modeling, warehousing, and pipeline orchestration
- Experience with version control (Git) and infrastructure as code (Terraform)

Preferred Skills:
- Experience working with energy sector data or IoT/sensor-based data
- Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn)
- Familiarity with big data technologies like Apache Spark and Kafka
- Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
- Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS Databrew
- AWS Certifications (Data Analytics, Solutions Architect)
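As a rough illustration of the Glue-based pipeline work in the responsibilities list above, here is a minimal AWS Glue PySpark job sketch; the catalog database, table name, and S3 path are hypothetical.

```python
# Minimal AWS Glue PySpark job sketch -- catalog/table/bucket names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_ctx = GlueContext(sc)
job = Job(glue_ctx)
job.init(args["JOB_NAME"], args)

# Read raw sensor readings registered in the Glue Data Catalog.
readings = glue_ctx.create_dynamic_frame.from_catalog(
    database="iot_raw", table_name="sensor_readings"
)

# Keep only the fields downstream analytics needs.
clean = readings.select_fields(["device_id", "ts", "temperature"])

# Write analytics-ready Parquet back to S3, partitioned by device.
glue_ctx.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated/iot/",
        "partitionKeys": ["device_id"],
    },
    format="parquet",
)
job.commit()
```

Partitioning the curated output by a query-time filter column (here `device_id`) is what keeps Athena/Redshift Spectrum scans cheap later on.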

Posted 5 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Java Developer
Job Type: Contract (6 Months)
Location: Pune
Years of Experience: 6+

Role Overview
We are seeking a skilled Java Developer for a 6-month contract role based in Pune. The ideal candidate will have strong hands-on experience in Java-based enterprise application development and a solid understanding of cloud technologies.

Key Responsibilities
- Analyze customer/internal requirements and translate them into software design documents; present RFCs to the architecture team
- Write clean, high-quality, maintainable code based on approved designs
- Conduct thorough unit and system-level testing to ensure software reliability
- Collaborate with cross-functional teams to analyze, design, and deliver applications
- Ensure optimal performance, scalability, and responsiveness of applications
- Take technical ownership of assigned features
- Provide mentorship and support to team members for resolving technical and functional issues
- Review and approve peer code through pull requests

Must-Have Skills
- Frameworks/Technologies: Spring Boot, Spring AOP, Spring MVC, Hibernate, Play, REST APIs, Microservices
- Programming Languages: Core Java, Java 8 (streams, lambdas, fluent-style programming), J2EE
- Database: Strong SQL skills with the ability to write complex queries
- DevOps: Hands-on experience with CI/CD pipelines
- Cloud: Solid understanding of AWS services such as S3, Lambda, SNS, SQS, IAM Roles, Kinesis, EMR, Databricks
- Coding Practices: Scalable and maintainable code development; experience in cloud-native application development

Nice-to-Have Skills
- Additional Languages/Frameworks: Golang, React, OAuth, SCIM
- Databases: NoSQL, Redshift
- AWS Tools: KMS, CloudWatch, Caching, Notification Services, Queues

Candidate Requirements
- Proven experience in core application development
- Strong communication and interpersonal skills
- Proactive attitude with a willingness to learn new technologies and products
- Collaborative team player with a growth mindset

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

On-site

Source: LinkedIn

Job Title: BI Engineer – Amazon QuickSight Developer

Job Summary
We are seeking an experienced Amazon QuickSight Developer to join our BI team. This role requires deep expertise in designing and deploying intuitive, high-impact dashboards and managing all aspects of QuickSight administration. You’ll collaborate closely with data engineers and business stakeholders to create scalable BI solutions that empower data-driven decisions across the organization.

Key Responsibilities

Dashboard Development & Visualization
- Design, develop, and maintain interactive QuickSight dashboards using advanced visuals, parameters, and controls
- Create reusable datasets and calculated fields using both SPICE and Direct Query modes
- Implement advanced analytics such as level-aware calculations, ranking, period-over-period comparisons, and custom KPIs
- Build dynamic, user-driven dashboards with multi-select filters, dropdowns, and custom date ranges
- Optimize performance and usability to maximize business value and user engagement

QuickSight Administration
- Manage users, groups, and permissions through QuickSight and AWS IAM roles
- Implement and maintain row-level security (RLS) to ensure appropriate data access
- Monitor usage, SPICE capacity, and subscription resources to maintain system performance
- Configure and maintain themes, namespaces, and user interfaces for consistent experiences
- Work with IT/cloud teams on account-level settings and AWS integrations

Collaboration & Data Integration
- Partner with data engineers and analysts to understand data structures and business needs
- Integrate QuickSight with AWS services such as Redshift, Athena, S3, and Glue
- Ensure data quality and accuracy through robust data modeling and SQL optimization

Required Skills & Qualifications
- 3+ years of hands-on experience with Amazon QuickSight (development and administration)
- Strong SQL skills and experience working with large, complex datasets
- Expert-level understanding of QuickSight security, RLS, SPICE management, and user/group administration
- Strong sense of data visualization best practices and UX design principles
- Proficiency with AWS data services including Redshift, Athena, S3, Glue, and IAM
- Solid understanding of data modeling and business reporting frameworks

Nice to Have
- Experience with Python, AWS Lambda, or automating QuickSight administration via SDK or CLI (see the sketch below)
- Familiarity with modern data stack tools (e.g., dbt, Snowflake, Tableau, Power BI)

Apply Now
If you’re passionate about building scalable BI solutions and making data come alive through visualization, we’d love to hear from you!
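The "Nice to Have" item about automating QuickSight administration via the SDK can be sketched with boto3. This is a minimal example under the assumption of the default namespace; the AWS account ID and region are placeholders.

```python
# Minimal QuickSight administration sketch using boto3 -- account ID is a placeholder.
import boto3

ACCOUNT_ID = "123456789012"  # hypothetical
qs = boto3.client("quicksight", region_name="us-east-1")

# List registered users in the default namespace.
users = qs.list_users(AwsAccountId=ACCOUNT_ID, Namespace="default")
for user in users["UserList"]:
    print(user["UserName"], user["Role"])

# Enumerate datasets to keep an eye on SPICE usage vs. direct query.
datasets = qs.list_data_sets(AwsAccountId=ACCOUNT_ID)
for ds in datasets["DataSetSummaries"]:
    print(ds["Name"], ds.get("ImportMode"))  # SPICE vs DIRECT_QUERY
```

Scripts like this are typically scheduled (e.g., in Lambda) to audit users and SPICE consumption rather than clicking through the console.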

Posted 6 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Company Description
Coders Brain is a global leader in IT services, digital and business solutions that partners with clients to simplify, strengthen, and transform their businesses. The company ensures high levels of certainty and satisfaction through deep industry expertise and a global network of innovation and delivery centers.

Job Title: Senior Data Engineer
Location: Hyderabad
Experience: 6+ Years
Employment Type: Full-Time

Job Summary:
We are looking for a highly skilled Senior Data Engineer to join our Data Engineering team. You will play a key role in designing, implementing, and optimizing robust, scalable data solutions that drive business decisions for our clients. This position involves hands-on development of data pipelines, cloud data platforms, and analytics tools using cutting-edge technologies.

Key Responsibilities:
- Design and build reliable, scalable, and high-performance data pipelines to ingest, transform, and store data from various sources
- Develop cloud-based data infrastructure using platforms such as AWS, Azure, or Google Cloud Platform (GCP)
- Optimize data processing and storage frameworks for cost efficiency and performance
- Ensure high standards for data quality, integrity, and governance across all systems
- Collaborate with cross-functional teams including data scientists, analysts, and product managers to translate requirements into technical solutions
- Troubleshoot and resolve issues with data pipelines and workflows, ensuring system reliability and availability
- Stay current with emerging trends and technologies in big data and cloud ecosystems and recommend improvements accordingly

Required Qualifications:
- Bachelor’s degree in Computer Science, Software Engineering, or a related field
- Minimum 6 years of professional experience in data engineering or a related discipline
- Proficiency in Python, Java, or Scala for data engineering tasks
- Strong expertise in SQL and hands-on experience with modern data warehouses (e.g., Snowflake, Redshift, BigQuery); see the sketch below
- In-depth knowledge of big data technologies such as Hadoop, Spark, or Hive
- Practical experience with cloud-based data platforms such as AWS (e.g., Glue, EMR), Azure (e.g., Data Factory, Synapse), or GCP (e.g., Dataflow, BigQuery)
- Excellent analytical, problem-solving, and communication skills

Nice to Have:
- Experience with containerization and orchestration tools such as Docker and Kubernetes
- Familiarity with CI/CD pipelines for data workflows
- Knowledge of data governance, security, and compliance best practices
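One routine task in a role like this is probing warehouse data quality from Python. Here is a minimal sketch against Redshift using psycopg2; every connection detail and table name is a placeholder, not from the posting.

```python
# Minimal Redshift query sketch via psycopg2 -- connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # in practice, use Secrets Manager or IAM-based auth
)
try:
    with conn.cursor() as cur:
        # Simple data-quality probe: row counts for the last seven load dates.
        cur.execute(
            "SELECT load_date, COUNT(*) FROM sales.orders "
            "GROUP BY load_date ORDER BY load_date DESC LIMIT 7"
        )
        for load_date, n in cur.fetchall():
            print(load_date, n)
finally:
    conn.close()
```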

Posted 6 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

About Us
At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team—comprising solution architects, data scientists, engineers, product managers, and designers—collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business. Our ecosystem empowers rapid execution with plug-and-play tools, enabling scalable, AI-powered strategies that fast-track your digital transformation. With a focus on automation and seamless integration, we help you stay ahead—letting you focus on your core while we accelerate your growth.

Responsibilities & Qualifications
- Data Architecture Design: Develop and implement scalable and efficient data architectures for batch and real-time data processing. Design and optimize data lakes, warehouses, and marts to support analytical and operational use cases.
- ETL/ELT Pipelines: Build and maintain robust ETL/ELT pipelines to extract, transform, and load data from diverse sources. Ensure pipelines are highly performant, secure, and resilient to handle large volumes of structured and semi-structured data.
- Data Quality and Governance: Establish data quality checks, monitoring systems, and governance practices to ensure the integrity, consistency, and security of data assets. Implement data cataloging and lineage tracking for enterprise-wide data transparency.
- Collaboration with Teams: Work closely with data scientists and analysts to provide accessible, well-structured datasets for model development and reporting. Partner with software engineering teams to integrate data pipelines into applications and services.
- Cloud Data Solutions: Architect and deploy cloud-based data solutions using platforms like AWS, Azure, or Google Cloud, leveraging services such as S3, BigQuery, Redshift, or Snowflake. Optimize cloud infrastructure costs while maintaining high performance.
- Data Automation and Workflow Orchestration: Utilize tools like Apache Airflow, n8n, or similar platforms to automate workflows and schedule recurring data jobs. Develop monitoring systems to proactively detect and resolve pipeline failures.
- Innovation and Leadership: Research and implement emerging data technologies and methodologies to improve team productivity and system efficiency. Mentor junior engineers, fostering a culture of excellence and innovation.

Required Skills:
- Experience: 7+ years of overall experience in data engineering roles, with at least 2+ years in a leadership capacity; proven expertise in designing and deploying large-scale data systems and pipelines
- Technical Skills: Proficiency in Python, Java, or Scala for data engineering tasks; strong SQL skills for querying and optimizing large datasets; experience with data processing frameworks like Apache Spark, Beam, or Flink; hands-on experience with ETL tools like Apache NiFi, dbt, or Talend; experience in pub/sub and stream processing using Kafka, Kinesis, or the like (see the sketch below)
- Cloud Platforms: Expertise in one or more cloud platforms (AWS, Azure, GCP) with a focus on data-related services
- Data Modeling: Strong understanding of data modeling techniques (dimensional modeling, star/snowflake schemas)
- Collaboration: Proven ability to work with cross-functional teams and translate business requirements into technical solutions

Preferred Skills:
- Familiarity with data visualization tools like Tableau or Power BI to support reporting teams
- Knowledge of MLOps pipelines and collaboration with data scientists
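For the pub/sub and stream-processing requirement, a minimal consumer sketch using the kafka-python client; the topic name, broker address, and consumer group are all hypothetical.

```python
# Minimal Kafka consumer sketch with kafka-python -- topic/brokers are hypothetical.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders.events",                        # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="analytics-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Placeholder for the real sink: batch writes to the lake or warehouse.
    print(message.topic, message.partition, message.offset, event.get("order_id"))
```

Consumer groups give horizontal scale for free: running a second copy of this process with the same `group_id` splits the topic's partitions between them.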

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

Remote

Source: LinkedIn

Title: Azure Data Engineer
Location: Remote
Employment type: Full Time with BayOne

We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory (see the sketch below)
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

Tech Stack: Azure | Databricks | PySpark | SQL

What We’re Looking For
- 3+ years of experience in data engineering or analytics engineering
- Hands-on experience with cloud data platforms and large-scale data processing
- Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
- Minimum 3 years of experience in modern data engineering/data warehousing/data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc.; Azure experience is preferred over other cloud platforms
- 5 years of proven experience with SQL, schema design and dimensional data modelling
- Solid knowledge of data warehouse best practices, development standards and methodologies
- Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL
- An independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced, dynamic environment
- Excellent communication and teamwork abilities

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge
- SAP ECC/S/4 and HANA knowledge
- Intermediate knowledge of Power BI
- Azure DevOps and CI/CD deployments, cloud migration methodologies and processes

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, disability, or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
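A minimal PySpark sketch of the kind of lakehouse transformation this posting describes, assuming a Databricks-style environment where Delta Lake is available; the mount paths, table layers, and column names are invented.

```python
# Minimal PySpark/Delta "bronze to silver" sketch -- paths and names are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Read raw (bronze) events landed by the ingestion pipeline.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

# Clean and conform: de-duplicate, type the timestamp, add a date partition.
silver = (
    bronze.dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
)

# Write the curated (silver) table, partitioned for downstream queries.
(silver.write.format("delta")
       .mode("overwrite")
       .partitionBy("event_date")
       .save("/mnt/lake/silver/events"))
```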

Posted 6 days ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Source: Indeed

Our software engineers at Fiserv bring an open and creative mindset to a global team developing mobile applications, user interfaces and much more to deliver industry-leading financial services technologies to our clients. Our talented technology team members solve challenging problems quickly and with quality. We're seeking individuals who can create frameworks, leverage developer tools, and mentor and guide other members of the team. Collaboration is key, and whether you are an expert in a legacy software system or are fluent in a variety of coding languages, you're sure to find an opportunity as a software engineer that will challenge you to perform exceptionally and deliver excellence for our clients.

Full-time | Entry, Mid, Senior | Yes (occasional), Minimal (if any)

Requisition ID: R-10358702
Date posted: 06/13/2025
End Date: 06/30/2025
City: Chennai
State/Region: Tamil Nadu
Country: India
Location Type: Onsite

Calling all innovators – find your future at Fiserv.

We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Specialist, Software Development Engineering

We are looking for a seasoned Data Engineer to design and develop data applications powering various operational and analytical use cases. This is a full-time position with career growth opportunities and a competitive benefits package. If you want to work with leading technology and cloud platforms and help financial institutions and businesses worldwide solve complex business challenges every day, this is the right opportunity for you.

What does a successful Data Engineer do at Fiserv?
You will be responsible for developing data solutions on the cloud, along with product enhancements and new features. As a hands-on engineer, you will work in an Agile development model, developing and maintaining data solutions including data ingestion, transformation, and reporting.

What you will do:
- Drive data and analytical application development and maintenance
- Design and develop highly efficient data engineering pipelines and database systems leveraging Oracle/Snowflake, AWS, Java
- Build highly efficient, performant data applications serving the business with high data accuracy and fast response times
- Optimize performance and fix bugs to improve data availability and accuracy
- Create technical design documents
- Collaborate with multiple teams to provide technical know-how and solutions to complex business problems
- Develop reusable assets and create a knowledge repository

What you will need to have:
- 7+ years of experience in designing and deploying enterprise-level data applications
- Strong development/technical skills in PL/SQL, Oracle, Shell Scripting, Java, Spring Boot
- Experience with cloud platforms like AWS and with one of the cloud databases such as Snowflake or Redshift
- Strong understanding of and development skills in data transformations and aggregations

What would be great to have:
- Experience in Java and RESTful APIs
- Experience in handling real-time data loading using Kafka
- You stay focused – you want to ship software that solves real problems for real people, now
- You’re a professional – you understand that it’s not enough to write working code; it must also be well-designed, easy to test, and easy to add to over time
- You’re learning – no matter how much you know, you are always seeking to learn more and to become a better engineer and leader

Thank you for considering employment with Fiserv. Please:
- Apply using your legal name
- Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable)

Our commitment to Diversity and Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.

Posted 6 days ago

Apply

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Senior Data Engineer (Contract)
Location: Bengaluru, Karnataka, India

About the Role:
We're looking for an experienced Senior Data Engineer (6-8 years) to join our data team. You'll be key in building and maintaining our data systems on AWS. You'll use your strong skills in big data tools and cloud technology to help our analytics team get valuable insights from our data. You'll be in charge of the whole lifecycle of our data pipelines, making sure the data is good, reliable, and fast.

What You'll Do:
- Design and build efficient data pipelines using Spark/PySpark/Scala
- Manage complex data processes with Airflow, creating and fixing any issues with the workflows (DAGs)
- Clean, transform, and prepare data for analysis
- Use Python for data tasks, automation, and building tools
- Work with AWS services like S3, Redshift, EMR, Glue, and Athena to manage our data infrastructure
- Collaborate closely with the Analytics team to understand what data they need and provide solutions
- Help develop and maintain our Node.js backend, using Typescript, for data services
- Use YAML to manage the settings for our data tools (see the sketch after this section)
- Set up and manage automated deployment processes (CI/CD) using GitHub Actions
- Monitor and fix problems in our data pipelines to keep them running smoothly
- Implement checks to ensure our data is accurate and consistent
- Help design and build data warehouses and data lakes
- Use SQL extensively to query and work with data in different systems
- Work with streaming data using technologies like Kafka for real-time data processing
- Stay updated on the latest data engineering technologies
- Guide and mentor junior data engineers
- Help create data management rules and procedures

What You'll Need:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 6-8 years of experience as a Data Engineer
- Strong skills in Spark and Scala for handling large amounts of data
- Good experience with Airflow for managing data workflows and understanding DAGs
- Solid understanding of how to transform and prepare data
- Strong programming skills in Python for data tasks and automation
- Proven experience working with AWS cloud services (S3, Redshift, EMR, Glue, IAM, EC2, and Athena)
- Experience building data solutions for Analytics teams
- Familiarity with Node.js for backend development
- Experience with Typescript for backend development is a plus
- Experience using YAML for configuration management
- Hands-on experience with GitHub Actions for automated deployment (CI/CD)
- Good understanding of data warehousing concepts
- Strong database skills - OLAP/OLTP
- Excellent command of SQL for data querying and manipulation
- Experience with stream processing using Kafka or similar technologies
- Excellent problem-solving, analytical, and communication skills
- Ability to work well independently and as part of a team

Bonus Points:
- Familiarity with data lake technologies (e.g., Delta Lake, Apache Iceberg)
- Experience with other stream processing technologies (e.g., Flink, Kinesis)
- Knowledge of data management, data quality, statistics and data governance frameworks
- Experience with tools for managing infrastructure as code (e.g., Terraform)
- Familiarity with container technologies (e.g., Docker, Kubernetes)
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana)
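The YAML-managed settings mentioned in the responsibilities can be sketched briefly. This is a minimal Python loader with a hypothetical config schema (the keys shown are invented for illustration), validating required keys up front so a misconfigured pipeline fails fast.

```python
# Minimal YAML pipeline-config loader -- the config schema is hypothetical.
import yaml  # pip install pyyaml

CONFIG = """
pipeline:
  name: orders_daily
  source:
    s3_path: s3://example-raw/orders/
  target:
    redshift_table: analytics.orders
  schedule: "0 2 * * *"
"""

cfg = yaml.safe_load(CONFIG)["pipeline"]

# Fail fast on missing keys rather than deep inside the run.
for key in ("source", "target", "schedule"):
    assert key in cfg, f"missing config key: {key}"

print(cfg["name"], "->", cfg["target"]["redshift_table"])
```

Keeping pipeline settings in YAML (versioned in Git, deployed via CI/CD) separates configuration changes from code changes, which is the usual motivation for this setup.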

Posted 6 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Your potential, unleashed.
India’s impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organizations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The Team
Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning.

Your work profile:
As a Senior Consultant/Manager in our Technology & Transformation practice, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.

Roles and Responsibilities:
The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A Data Engineer is expected to possess strong technical skills.

Key Characteristics:
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions
- Interest and passion in Big Data technologies, and appreciation of the value an effective data management solution can bring
- Has worked on real data challenges and handled high volume, velocity, and variety of data
- Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges
- Contributes to community-building initiatives like CoE, CoP

Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill

Optional skills:
- Experience in project management, running a scrum team
- Experience working with BPC, Planning
- Exposure to working with an external technical ecosystem
- MkDocs documentation

Location and way of working:
- Base location: Bengaluru
- This profile involves occasional travelling to client locations
- Hybrid is our default way of working. Each domain has customized the hybrid approach to their unique needs.

Your role as a Senior Consultant/Manager:
We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society. In addition to living our purpose, Senior Consultants/Managers across our organization must strive to be:
- Inspiring - Leading with integrity to build inclusion and motivation
- Committed to creating purpose - Creating a sense of vision and purpose
- Agile - Achieving high-quality results through collaboration and team unity
- Skilled at building diverse capability - Developing diverse capabilities for the future
- Persuasive/Influencing - Persuading and influencing stakeholders
- Collaborating - Partnering to build new solutions
- Delivering value - Showing commercial acumen
- Committed to expanding business - Leveraging new business opportunities
- Analytical Acumen - Leveraging data to recommend impactful approaches and solutions through the power of analysis and visualization
- Effective communication - Able to hold well-structured and well-articulated conversations to achieve win-win possibilities
- Engagement Management/Delivery Excellence - Effectively managing engagements to ensure timely and proactive execution as well as course correction for the success of engagements
- Managing change - Responding to a changing environment with resilience
- Managing Quality & Risk - Delivering high-quality results and mitigating risks with utmost integrity and precision
- Strategic Thinking & Problem Solving - Applying a strategic mindset to solve business issues and complex problems
- Tech Savvy - Leveraging ethical technology practices to deliver high impact for clients and for Deloitte
- Empathetic leadership and inclusivity - Creating a safe and thriving environment where everyone is valued for who they are; using empathy to understand others and adapting our behaviors and attitudes to become more inclusive

How you’ll grow

Connect for impact
Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.

Empower to lead
You can be a leader irrespective of your career level. Our colleagues are characterized by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.

Inclusion for all
At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.

Drive your career
At Deloitte, you are encouraged to take ownership of your career. We recognize there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.

Everyone’s welcome… entrust your happiness to us
Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you.

Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research, know some background about the organization and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.

Posted 6 days ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

On-site

Source: LinkedIn

Location: PAN India
Duration: 6 Months
Experience Required: 7–8 years

Job Summary
We are looking for an experienced SSAS Developer with strong expertise in developing both OLAP and Tabular models using SQL Server Analysis Services (SSAS), alongside advanced ETL development skills using tools like SSIS, Informatica, or Azure Data Factory. The ideal candidate will be well-versed in T-SQL, dimensional modeling, and building high-performance, scalable data solutions.

Key Responsibilities
- Design, build, and maintain SSAS OLAP cubes and Tabular models
- Create complex DAX and MDX queries for analytical use cases
- Develop robust ETL workflows and pipelines using SSIS, Informatica, or ADF
- Collaborate with cross-functional teams to translate business requirements into BI solutions
- Optimize SSAS models for scalability and performance
- Implement best practices in data modeling, version control, and deployment automation
- Support dashboarding and reporting needs via Power BI, Excel, or Tableau
- Maintain and troubleshoot data quality, performance, and integration issues

Must-Have Skills
- Hands-on experience with SSAS (Tabular & Multidimensional)
- Proficient in DAX, MDX, and T-SQL
- Advanced ETL skills using SSIS/Informatica/Azure Data Factory
- Knowledge of dimensional modeling (star & snowflake schemas)
- Experience with Azure SQL/MS SQL Server
- Familiarity with Git and CI/CD pipelines

Nice to Have
- Exposure to cloud data platforms (Azure Synapse, Snowflake, AWS Redshift)
- Working knowledge of Power BI or similar BI tools
- Understanding of Agile/Scrum methodology
- Bachelor's degree in Computer Science, Information Systems, or equivalent

Posted 6 days ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We are seeking a highly skilled and hands-on Data Engineer to join Controls Technology to support the design, development, and implementation of our next-generation Data Mesh and Hybrid Cloud architecture. This role is critical in building scalable, resilient, and future-proof data pipelines and infrastructure that enable the seamless integration of Controls Technology data within a unified platform. The Data Engineer will work closely with the Data Mesh and Cloud Architect Lead to implement data products, ETL/ELT pipelines, hybrid cloud integrations, and governance frameworks that support data-driven decision-making across the enterprise.

Key Responsibilities:

Data Pipeline Development:
- Design, build, and optimize ETL/ELT pipelines for structured and unstructured data
- Develop real-time and batch data ingestion pipelines using distributed data processing frameworks
- Ensure pipelines are highly performant, cost-efficient, and secure

Apache Iceberg & Starburst Integration:
- Work extensively with Apache Iceberg for data lake storage optimization and schema evolution (see the sketch below)
- Manage Iceberg catalogs and ensure seamless integration with query engines
- Configure and maintain Hive MetaStore (HMS) for Iceberg-backed tables and ensure proper metadata management
- Utilize Starburst and Stargate to enable distributed SQL-based analytics and seamless data federation
- Optimize performance tuning for large-scale querying and federated access to structured and semi-structured data

Data Mesh Implementation:
- Implement Data Mesh principles by developing domain-specific data products that are discoverable, interoperable, and governed
- Collaborate with data domain owners to enable self-service data access while ensuring consistency and quality

Hybrid Cloud Data Integration:
- Develop and manage data storage, processing, and retrieval solutions across AWS and on-premise environments
- Work with cloud-native tools such as AWS S3, RDS, Lambda, Glue, Redshift, and Athena to support scalable data architectures
- Ensure hybrid cloud data flows are optimized, secure, and compliant with organizational standards

Data Governance & Security:
- Implement data governance, lineage tracking, and metadata management solutions
- Enforce security best practices for data encryption, role-based access control (RBAC), and compliance with policies such as GDPR and CCPA

Performance Optimization & Monitoring:
- Monitor and optimize data workflows, performance tuning of queries, and resource utilization
- Implement logging, alerting, and monitoring solutions using CloudWatch, Prometheus, or Grafana to ensure system health

Collaboration & Documentation:
- Work closely with data architects, application teams, and business units to ensure seamless integration of data solutions
- Maintain clear documentation of data models, transformations, and architecture for internal reference and governance

Required Technical Skills:
- Programming & Scripting: Strong proficiency in Python, SQL, and shell scripting; experience with Scala or Java is a plus
- Data Processing & Storage: Hands-on experience with Apache Spark, Kafka, Flink, or similar distributed processing frameworks; strong knowledge of relational (PostgreSQL, MySQL, Oracle) and NoSQL databases (DynamoDB, MongoDB); expertise in Apache Iceberg for managing large-scale data lakes, schema evolution, and ACID transactions; experience working with Iceberg catalogs, Hive MetaStore (HMS), and integrating Iceberg-backed tables with query engines; familiarity with Starburst and Stargate for federated querying and cross-platform data access
- Cloud & Hybrid Architecture: Experience working with AWS data services (S3, Redshift, Glue, Athena, EMR, RDS); understanding of hybrid data storage and integration between on-prem and cloud environments
- Infrastructure as Code (IaC) & DevOps: Experience with Terraform, AWS CloudFormation, or Kubernetes for provisioning infrastructure; CI/CD pipeline experience using GitHub Actions, Jenkins, or GitLab CI/CD
- Data Governance & Security: Familiarity with data cataloging, lineage tracking, and metadata management; understanding of RBAC, IAM roles, encryption, and compliance frameworks (GDPR, SOC2, etc.)

Required Soft Skills:
- Problem-Solving & Analytical Thinking - Ability to troubleshoot complex data issues and optimize workflows
- Collaboration & Communication - Comfortable working with cross-functional teams and articulating technical concepts to non-technical stakeholders
- Ownership & Proactiveness - Self-driven, detail-oriented, and able to take ownership of tasks with minimal supervision
- Continuous Learning - Eager to explore new technologies, improve skill sets, and stay ahead of industry trends

Qualifications:
- 4-6 years of experience in data engineering, cloud infrastructure, or distributed data processing
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Technology, or a related field
- Hands-on experience with data pipelines, cloud services, and large-scale data platforms
- Strong foundation in SQL, Python, Apache Iceberg, Starburst, cloud-based data solutions (AWS preferred), and Apache Airflow orchestration

------------------------------------------------------
Job Family Group: Technology
Job Family: Data Architecture
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
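Given the posting's Apache Iceberg focus, here is a minimal PySpark sketch of registering an Iceberg catalog and writing a table, assuming the Iceberg Spark runtime jar is on the classpath; the catalog name, warehouse path, and table are assumptions for illustration.

```python
# Minimal PySpark + Apache Iceberg sketch -- catalog/warehouse names are assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg_demo")
    # Register an Iceberg catalog backed by a warehouse path.
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3://example-lake/warehouse")
    .getOrCreate()
)

# Create an Iceberg table and append rows; ACID semantics and schema
# evolution come from the table format, not the query engine.
spark.sql(
    "CREATE TABLE IF NOT EXISTS demo.db.trades (id BIGINT, sym STRING) USING iceberg"
)
spark.createDataFrame([(1, "AAPL"), (2, "MSFT")], ["id", "sym"]) \
     .writeTo("demo.db.trades").append()

spark.sql("SELECT COUNT(*) FROM demo.db.trades").show()
```

Because the table metadata lives in the catalog rather than the engine, the same Iceberg table can then be queried from Starburst/Trino, which is the federation story the role describes.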

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description: Analytics Engineer

We are seeking a talented, motivated and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in the Human Health transformation journey to become the premier “Data First” commercial biopharma organization.

As an Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical/data expertise in the development of analytical data products to enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space – to develop best-in-class data pipelines and products, working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery.

Your specific responsibilities will include:
- Hands-on development of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices
- Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way
- Develop deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for
- Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts and visualization developers on how to use these data models
- Develop analytical data products for reusability, governance and compliance by design
- Align with organization strategy and implement a semantic layer for analytics data products
- Support data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks

Education:
- B.Tech/B.S., M.Tech/M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or related field

Required Experience:
- 5+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience in analyzing, modeling and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets)
- High proficiency in SQL, Python and AWS
- Good understanding and comprehension of the requirements provided by the Data Product Owner and Lead Analytics Engineer
- Experience creating/adopting data models to meet requirements from Marketing, Data Science and Visualization stakeholders
- Experience with feature engineering (see the sketch below)
- Experience with cloud-based (AWS/GCP/Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.)
- Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku)
- Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders
- Experience in analytics use cases of pharmaceutical products and vaccines
- Experience in market analytics and related use cases

Preferred Experience:
- Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines
- Experience with Agile ways of working, leading or working as part of scrum teams
- Certifications in AWS and/or modern data technologies
- Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors
- Experience in building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers and business stakeholders
- Experience with data visualization technologies (e.g., Power BI)

Our Human Health Division maintains a “patient first, profits later” ideology. The organization is comprised of sales, marketing, market access, digital analytics and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide.

We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another’s thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully:
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:

Job Posting End Date: 07/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.

Requisition ID: R335382
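The feature-engineering experience asked for above can be illustrated with a small pandas sketch; the prescriber-claims-style frame and every column in it are invented for the example.

```python
# Minimal feature-engineering sketch in pandas -- the dataset is invented.
import pandas as pd

claims = pd.DataFrame({
    "hcp_id": [101, 101, 102, 102, 102],
    "fill_date": pd.to_datetime(
        ["2025-01-03", "2025-02-10", "2025-01-15", "2025-01-20", "2025-03-01"]),
    "units": [30, 30, 60, 30, 90],
})

# Aggregate per prescriber: volume, frequency, and recency features.
features = (
    claims.groupby("hcp_id")
    .agg(total_units=("units", "sum"),
         n_fills=("fill_date", "count"),
         last_fill=("fill_date", "max"))
    .assign(days_since_last=lambda df:
            (pd.Timestamp("2025-03-31") - df["last_fill"]).dt.days)
)
print(features)
```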

Posted 6 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

Remote

Source: LinkedIn

Primary Skills: Data Engineer

JD:
Responsible for leading the team while also being willing to get hands-on by writing code and contributing to any phase of the development lifecycle. Key technologies: AWS, Redshift, EMR, cloud ETL tools, S3.

We are looking for a Senior Consultant with at least 5 years of experience to join our team. The ideal candidate should have strong leadership skills and be able to lead a team effectively. At the same time, the candidate should also be willing to get their hands dirty by writing code and contributing to the development lifecycle. The primary skills required for this role include expertise in AWS Redshift and AWS Native Services, as well as experience with EMR, cloud ETL tools, and S3. The candidate should have a strong understanding of these tools and be able to utilize them effectively to meet project goals. As a Senior Consultant, the candidate will be responsible for providing guidance and support to the team, as well as ensuring the successful completion of projects. This is a hybrid work mode position, requiring the candidate to work both remotely and in the office as needed.

Posted 6 days ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description

Job Title: Senior Business Analyst
Experience: 6+ Years
Location: Trivandrum / Kochi / Bangalore
Work Mode: Hybrid (3 days from office)
Work Timings: 5 PM to 2 AM IST (to align with US business teams)

Job Description
We are looking for a highly skilled Senior Business Analyst with a strong background in requirement gathering, stakeholder management, and technical collaboration. The candidate must be comfortable working closely with US-based clients and managing end-to-end business analysis and solution design.

Roles & Responsibilities
- Elicit, analyze, and document clear, concise, and detailed business requirements through interaction with business stakeholders
- Convert project scope into detailed user stories, wireframes, prototypes, and process flows to support requirement clarification and approvals
- Collaborate with IT development teams to ensure proper understanding and implementation of requirements
- Coordinate with Quality Assurance teams to define and execute project testing strategies and plans
- Ensure business requirements traceability to technical requirements and validate system designs
- Lead User Acceptance Testing (UAT) efforts and ensure solutions meet business expectations
- Engage with clients and vendors to manage development, communication, and execution activities effectively
- Analyze data using SQL and extract insights to define or support business rules and technical solutions
- Document all business rules and technical mappings using tools like JIRA and Confluence

Mandatory Skills
- Minimum 6 years of IT experience with 4+ years in business analysis and requirement gathering
- Minimum 5 years of experience working directly with development teams
- Strong experience in writing SQL queries and data analysis
- Expertise in using JIRA, Confluence, and Agile methodologies
- Strong communication, consulting, and interpersonal skills with senior stakeholder engagement
- Proven experience in managing client/vendor relationships and leading cross-functional teams
- Experience in leading UAT planning and execution
- Comfortable working in a US time zone (5 PM – 2 AM IST)

Good To Have Skills
- Exposure to cloud database technologies like Snowflake, Redshift, Oracle
- Working knowledge of AWS or similar cloud platforms
- Experience applying Design Thinking principles
- Ability to develop and validate hypotheses to support business insights
- Experience in delivering under tight deadlines in hybrid working models

Skills: Business Analysis, Requirement Gathering, SQL

Posted 6 days ago

Apply

3.0 - 7.0 years

6 - 16 Lacs

Bengaluru

Work from Office

Source: Naukri

Responsibilities:
- Design, develop, and implement scalable data strategies aligned with business goals, including data modelling, migration and warehousing
- Integrate and correlate data from different business platforms/departments
- Analyse, plan, and define the data architecture framework for all data warehouse components, ensuring a flexible architecture to support the company's current and future growth:
  - Data Ingestion
  - Data Processing, Integration, Computation and Transformation
  - Data Storage (Data Lake and Data Warehouse)
  - Data Delivery – Pipeline and Orchestration
  - Data Consumption – BI, Analytics and Advanced Analytics (Data Science)
  - Data Management – MDM, Data Catalog and Data Governance, Enterprise Data Management (availability, quality, and access)
  - Data Security, Access Management and Data Privacy
- Continually monitor, refine, and report on data management system performance
- Collaborate with cross-functional teams to integrate and streamline data across platforms

Must Have:
- Master's/Bachelor's Degree in Computer Science, Data Science, Information Technology, or related fields
- 3-7 years of experience with programming languages such as PL/SQL and Python, and working with databases like Redshift, Oracle RAC, SQL Server and MySQL
- Hands-on experience with Snowflake and strong knowledge of data warehousing and architecture
- Good understanding of data and schema standards and concepts, database design, implementation, troubleshooting and maintenance
- Good understanding of data modelling, RDBMS, cloud computing, and data frameworks/architecture
- Experience with data lakes and modern data warehousing technologies
- Good to have: relevant industry experience in an NBFC or similar financial institution

Posted 6 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About the Job

The Director, Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization's business goals and enables data-driven decision making.

Roles and Responsibilities

• Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer, in this space
• Collaborate with and influence leadership to directly impact company strategy and direction
• Develop new techniques and data pipelines that will enable various insights for internal and external customers
• Develop deep partnerships with client implementation teams, engineering and product teams to deliver on major cross-functional measurements and testing
• Communicate effectively to all levels of the organization, including executives
• Partner successfully with teams of dramatically varying backgrounds, from the highly technical to the highly creative
• Design a data engineering roadmap and execute the vision behind it
• Hire, lead, and mentor a world-class data team
• Partner with other business areas to co-author and co-drive strategies on our shared roadmap
• Oversee the movement of large amounts of data into our data lake
• Establish a customer-centric approach and synthesize customer needs
• Own end-to-end pipelines and destinations for the transfer and storage of all data
• Manage 3rd-party resources and critical data integration vendors
• Promote a culture that drives autonomy, responsibility, perfection and mastery
• Maintain and optimize software and cloud expenses to meet financial goals of the company
• Provide technical leadership to the team in design and architecture of data products, and drive change across process, practices, and technology within the organization
• Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department
• Ensure data quality, security, and accessibility across the organization

Skills You Will Need

• 10+ years of experience in data engineering
• 5+ years of experience leading data teams of 30+ resources, including selection of talent and planning/allocating resources across multiple geographies and functions
• 5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc.
• Experience creating large-scale data engineering pipelines, data-based decision-making and quantitative analysis tools and software
• Hands-on experience with version control systems (Git)
• Experience with CI/CD, data architectures, pipelines, quality, and code management
• Experience with complex, high-volume, multi-dimensional data, based on unstructured, structured, and streaming datasets
• Experience with SQL and NoSQL databases
• Experience creating, testing, and supporting production software and systems
• Proven track record of identifying and resolving performance bottlenecks for production systems
• Experience designing and developing data lake, data warehouse, ETL and task orchestration systems
• Strong leadership, communication, time management and interpersonal skills
• Proven architectural skills in data engineering
• Experience leading teams developing production-grade data pipelines on large datasets
• Experience designing a large data lake and lakehouse, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model
• Experience with common data languages (e.g. Python, Scala) and data warehouses (e.g. Redshift, BigQuery, Snowflake, Databricks)
• Extensive experience with cloud tools and technologies (GCP preferred)
• Experience managing real-time data pipelines
• Successful track record and demonstrated thought leadership and cross-functional influence and partnership within an agile/waterfall development environment
• Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001)

Nice to have:

• HR services industry experience
• Experience in data science, including predictive modeling
• Experience leading teams across multiple geographies

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


_VOIS Intro

About _VOIS: _VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, _VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

_VOIS Centre Intro

About _VOIS India: In 2009, _VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, _VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Key Responsibilities

• Design and build data pipelines: develop scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
• Create efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
• Build and manage applications using Python, SQL, Databricks, and various AWS technologies.
• Utilize QuickSight to create insightful data visualizations and dashboards.
• Quickly develop innovative Proof-of-Concept (POC) solutions to address emerging needs.
• Provide support and manage the ongoing operation of data services.
• Automate repetitive tasks and build reusable frameworks to improve efficiency.
• Work with teams to design and develop data products that support marketing and other business functions.
• Ensure data services are reliable, maintainable, and seamlessly integrated with existing systems.

Required Skills And Experience

• Bachelor's degree in Computer Science, Engineering, or a related field.
• Technical skills: proficiency in Python with Pandas and PySpark.
• Hands-on experience with AWS services including S3, Glue, Lambda, API Gateway, and SQS.
• Knowledge of data processing tools like Spark, Hive, Kafka, and Airflow.
• Experience with batch job scheduling and managing data dependencies.
• Experience with QuickSight or similar tools.
• Familiarity with DevOps automation tools like GitLab, Bitbucket, Jenkins, and Maven.
• Understanding of Delta Lake would be an added advantage.

_VOIS Equal Opportunity Employer Commitment India

_VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.

As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!

Posted 6 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Welcome to Warner Bros. Discovery… the stuff dreams are made of.

Who We Are…

When we say, "the stuff dreams are made of," we're not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD's vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what's next… From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive.

Analytics Engineer II - Hyderabad, India

About Warner Bros. Discovery

Warner Bros. Discovery, a premier global media and entertainment company, offers audiences the world's most differentiated and complete portfolio of content, brands and franchises across television, film, streaming and gaming. The new company combines Warner Media's premium entertainment, sports and news assets with Discovery's leading non-fiction and international entertainment and sports businesses. For more information, please visit www.wbd.com.

Roles & Responsibilities

As an Analytics Engineer II, you will perform data analytics and data visualization-related efforts for the Data & Analytics organization at WBD. You're an engineer who not only understands how to use big data in answering complex business questions but also how to design semantic layers to best support self-service vehicles. You will manage projects from requirements gathering to planning to implementation of full-stack data solutions (pipelines to data tables to visualizations) with the support of the larger team. You will work closely with cross-functional partners to ensure that business logic is properly represented in the semantic layer and production environments, where it can be used by the wider Data & Analytics team to drive business insights and strategy.

• Design and implement data models that support flexible querying and data visualization
• Partner with stakeholders to understand business questions and build out advanced analytical solutions
• Advance automation efforts that help the team spend less time manipulating & validating data and more time analyzing
• Build frameworks that multiply the productivity of the team and are intuitive for other data teams to leverage
• Participate in the creation and support of analytics development standards and best practices
• Create systematic solutions for solving data anomalies: identifying, alerting, and root cause analysis
• Work proactively with stakeholders to understand the business need and build data analytics capabilities – especially in large enterprise use cases
• Identify and explore new opportunities through creative analytical and engineering methods

What To Bring

• Bachelor's degree, MS or greater in a quantitative field of study (Computer/Data Science, Engineering, Mathematics, Statistics, etc.)
• 3+ years of relevant experience in business intelligence/data engineering
• Expertise in writing SQL (clean, fast code is a must) and in data-warehousing concepts such as star schemas, slowly changing dimensions, ELT/ETL, and MPP databases
• Experience in transforming flawed/changing data into consistent, trustworthy datasets
• Experience with general-purpose programming (e.g. Python, Scala or other) dealing with a variety of data structures, algorithms, and serialization formats is a plus
• Experience with big-data technologies (e.g. Spark, Hadoop, Snowflake)
• Advanced ability to build reports and dashboards with BI tools (such as Looker, Tableau or Power BI)
• Experience with analytics tools such as Athena, Redshift, BigQuery or Snowflake
• Proficiency with Git (or similar version control) and CI/CD best practices is a plus
• Ability to write clear, concise documentation and to communicate generally with a high degree of precision
• Ability to solve ambiguous problems independently
• Ability to manage multiple projects and time constraints simultaneously
• Care for the quality of the input data and how the processed data is ultimately interpreted and used
• Prior experience in large enterprise use cases such as Sales Analytics, Financial Analytics, or Marketing Analytics
• Strong written and verbal communication skills

Characteristics & Traits

• Naturally inquisitive, critical thinker, proactive problem-solver, and detail-oriented
• Positive attitude and an open mind
• Strong organizational skills with the ability to act independently and responsibly
• Self-starter, comfortable initiating projects from design to execution with minimal supervision
• Ability to manage and balance multiple (and sometimes competing) priorities in a fast-paced, complex business environment, managing time effectively to consistently meet deadlines
• Team player and relationship builder

What We Offer

• A great place to work
• Equal opportunity employer
• Fast-track growth opportunities

How We Get Things Done…

This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.

Championing Inclusion at WBD

Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law.

If you're a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.

Posted 6 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description

Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com.

Job Description

We are looking for a skilled Test Engineer with experience in automated testing, rollback testing, and continuous integration environments. You will be responsible for ensuring the quality and reliability of our software products through automated testing strategies and robust test frameworks.

• Design and execute end-to-end test strategies for data pipelines, ETL/ELT jobs, and database systems.
• Validate data quality, completeness, transformation logic, and integrity across distributed data systems (e.g., Hadoop, Spark, Hive).
• Develop Python-based automated test scripts to validate data flows, schema validations, and business rules.
• Perform complex SQL queries to verify large datasets across staging and production environments (see the SQL sketch after this posting).
• Identify data issues and work closely with data engineers to resolve discrepancies.
• Contribute to test data management, environment setup, and regression testing processes.
• Work collaboratively with data engineers, business analysts, and QA leads to ensure accurate and timely data delivery.
• Participate in sprint planning, reviews, and defect triaging as part of the Agile process.

Qualifications

• 4+ years of experience in Data Testing, Big Data Testing, and/or Database Testing.
• Strong programming skills in Python for automation and scripting.
• Expertise in SQL for writing complex queries and validating large datasets.
• Experience with Big Data technologies such as Hadoop, Hive, Spark, HDFS, Kafka (any combination is acceptable).
• Hands-on experience with ETL/ELT testing and validating data transformations and pipelines.
• Exposure to cloud data platforms like AWS (Glue, S3, Redshift), Azure (Data Lake, Synapse), or GCP is a plus.
• Familiarity with test management and defect tracking tools like JIRA, TestRail, or Zephyr.
• Experience with CI/CD pipelines and version control (e.g., Git) is an advantage.
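As a rough illustration of the dataset verification this role describes, here is a minimal SQL sketch; the staging/prod schemas, the orders table, and the order_id key are hypothetical placeholders, not details from the posting:

```sql
-- Row-count reconciliation between staging and production copies of a table.
SELECT
    (SELECT COUNT(*) FROM staging.orders) AS staging_rows,
    (SELECT COUNT(*) FROM prod.orders)    AS prod_rows;

-- Surface rows loaded to staging that never reached prod (key-drift check).
SELECT s.order_id
FROM staging.orders AS s
LEFT JOIN prod.orders AS p ON p.order_id = s.order_id
WHERE p.order_id IS NULL
LIMIT 100;
```

In practice, checks like these are typically wrapped in the Python test harness the posting mentions, asserting that both counts match and that the anti-join returns zero rows.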

Posted 6 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Description

Amazon Music is awash in data! To help make sense of it all, the DISCO (Data, Insights, Science & Optimization) team:

(i) enables the Consumer Product Tech org to make data-driven decisions that improve customer retention, engagement and experience on Amazon Music. We build and maintain automated self-service data solutions and data science models, and deep-dive difficult questions to provide actionable insights. We also enable measurement, personalization and experimentation by operating key data programs ranging from attribution pipelines and northstar weblab metrics to causal frameworks.

(ii) delivers exceptional Analytics & Science infrastructure for DISCO teams, fostering a data-driven approach to insights and decision making. As platform builders, we are committed to constructing flexible, reliable, and scalable solutions to empower our customers.

(iii) accelerates and facilitates content analytics and provides independence to generate valuable insights in a fast, agile, and accurate way. This domain provides analytical support for the following topics within Amazon Music: Programming / Label Relations / PR / Stations / Livesports / Originals / Case & CAM.

The DISCO team enables repeatable, easy, in-depth analysis of music customer behaviors. We reduce the cost in time and effort of analysis, data set building, model building, and user segmentation. Our goal is to empower all teams at Amazon Music to make data-driven decisions and effectively measure their results by providing high-quality, high-availability data and democratized data access through self-service tools.

If you love the challenges that come with big data, then this role is for you. We collect billions of events a day, manage petabyte-scale data on Redshift and S3, and develop data pipelines using Spark/Scala on EMR, SQL-based ETL, Airflow and Java services.

We are looking for a talented, enthusiastic, and detail-oriented Data Engineer who knows how to take on big data challenges in an agile way. Duties include big data design and analysis, data modeling, and development, deployment, and operations of big data pipelines. You'll help build Amazon Music's most important data pipelines and data sets, and expand self-service data knowledge and capabilities through an Amazon Music data university.

The DISCO team develops data specifically for a set of key business domains like personalization and marketing, and provides and protects a robust self-service core data experience for all internal customers. We deal in AWS technologies like Redshift, S3, EMR, EC2, DynamoDB, Kinesis Firehose, and Lambda. Your team will manage the data exchange store (Data Lake) and the EMR/Spark processing layer, using Airflow as orchestrator. You'll build our data university and partner with Product, Marketing, BI, and ML teams to build new behavioural events, pipelines, datasets, models, and reporting to support their initiatives. You'll also continue to develop big data pipelines.

Key job responsibilities

• Bring a deep understanding of data, analytical techniques, and how to connect insights to the business, along with practical experience in insisting on the highest standards for operations in ETL and big data pipelines. With our Amazon Music Unlimited and Prime Music services, and our top music provider spot on the Alexa platform, providing high quality, high availability data to our internal customers is critical to our customer experiences.
• Assist the DISCO team with management of our existing environment, which consists of Redshift and SQL-based pipelines. The activities around these systems are well defined via standard operating procedures (SOPs) and typically involve approving data access requests and subscribing or adding new data to the environment.
• Manage SQL data pipelines (creating or updating existing pipelines).
• Perform maintenance tasks on the Redshift cluster (see the SQL sketch after this posting).
• Assist the team with the management of our next-generation AWS infrastructure. Tasks include infrastructure monitoring via CloudWatch alarms, infrastructure maintenance through code changes or enhancements, and troubleshooting/root-cause analysis of infrastructure issues that arise; in some cases this resource may also be asked to submit code changes based on infrastructure issues that arise.

About The Team

Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, we are innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Music Unlimited for unlimited on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

Basic Qualifications

• 2+ years of data engineering experience
• Experience with data modeling, warehousing and building ETL pipelines
• Experience with SQL
• Experience with one or more scripting languages (e.g., Python, KornShell)
• Experience in Unix
• Experience in troubleshooting issues related to data and infrastructure

Preferred Qualifications

• Experience with big data technologies such as Hadoop, Hive, Spark, EMR
• Experience with any ETL tool, like Informatica, ODI, SSIS, BODI, Datastage, etc.
• Knowledge of distributed systems as it pertains to data storage and computing
• Experience in building or administering reporting/analytics platforms

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI MAA 15 SEZ
Job ID: A2838395
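The "perform maintenance tasks on the Redshift cluster" responsibility above usually means re-sorting tables, reclaiming deleted space, and keeping planner statistics fresh. A minimal sketch of what that can look like, using Redshift's SVV_TABLE_INFO system view; the analytics.play_events table is a hypothetical example, not something named in the posting:

```sql
-- Find tables whose unsorted fraction or stale statistics suggest maintenance.
SELECT "schema", "table", unsorted, stats_off
FROM svv_table_info
WHERE unsorted > 20 OR stats_off > 10
ORDER BY unsorted DESC;

-- Re-sort rows and reclaim deleted space, then refresh optimizer statistics.
VACUUM FULL analytics.play_events;
ANALYZE analytics.play_events;
```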

Posted 6 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description

"When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that's what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product" – Jeff Bezos

Amazon.com's success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate's route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that helps make those decisions in time.

The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and the executive presence to get in front of VPs and SVPs across Amazon will be imperative.

Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on the Amazon last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility/high-impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience.

The Successful Candidate Will Be Able To

• Effectively manage customer expectations and resolve conflicts that balance client and company needs.
• Develop processes to effectively maintain and disseminate project information to stakeholders.
• Be successful in a delivery-focused environment and determine the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done.
• Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment.
• Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
• Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
• Serve as a role model for Amazon Leadership Principles inside and outside the organization.
• Actively seek to implement and distribute best practices across the operation.

Key job responsibilities

• Metric Reporting
• Dashboard Development
• Design of ETL Pipelines
• Automation
• Experiment Design and Support
• Deep Dive Analysis
• Insight Generation
• Product Improvement Opportunity Identification
• Opportunity or Problem Sizing
• Support for Anecdotal Audits, etc.

Basic Qualifications

• 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
• Experience with data visualization using Tableau, Power BI, QuickSight, or similar tools
• Experience performing A/B testing and applying basic statistical methods (e.g. regression) to difficult business problems
• Experience with a scripting language (e.g., Python, Java, or R)
• Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
• Track record of generating key business insights and collaborating with stakeholders

Preferred Qualifications

• Experience in designing and implementing custom reporting systems using automation tools
• Knowledge of data modeling and data pipeline design

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka - A66
Job ID: A2949875

Posted 6 days ago

Apply

2.0 - 4.0 years

6 - 10 Lacs

Hyderabad

Hybrid


Company: AstroSoft Technologies (https://www.astrosofttech.com/)

AstroSoft is an award-winning company that specializes in the areas of Data, Analytics, Cloud, AI/ML, Innovation, and Digital. We have a customer-first mindset and take extreme ownership in delivering solutions and projects for our customers, and have consistently been recognized by our clients as the premium partner to work with. We bring to bear top-tier talent, a robust and structured project execution framework, and significant experience built over the years, and we have an impeccable record in delivering solutions and projects for our clients. Founded in 2004; headquarters in FL, USA; corporate office in Hyderabad, India.

Benefits from AstroSoft Technologies

• H1B sponsorship (depends on project & performance)
• Lunch & dinner (every day)
• Group health insurance coverage (industry standards)
• Leave policy
• Skill enhancement certification
• Hybrid mode

Job Title: AWS Engineer

Required Skills

• Minimum of 2+ years of direct experience as an AWS Data Engineer.
• Strong experience in AWS services like Redshift, ETL Glue, Spark, Python, Lambda, Kafka, S3, EMR, etc. is a must.
• Monitoring tools: CloudWatch.
• Experience in both development and support projects.
• Strong verbal and written communication skills.
• Strong experience and understanding of streaming architecture and development practices using Kafka, Spark, Flink, etc.
• Strong AWS development experience using S3, SNS, SQS, MWAA (Airflow), Glue, DMS and EMR.
• Strong knowledge of one or more programming languages Python/Java/Scala (ideally Python).
• Experience using Terraform to build IaC components in AWS.
• Strong experience with ETL tools in AWS; ODI experience is a plus.
• Strong experience with database platforms: Oracle, AWS Redshift.
• Strong experience in SQL tuning, tuning ETL solutions, and physical optimization of databases.
• Very familiar with SRE concepts, including evaluating and implementing monitoring and observability tools like Splunk, Datadog, CloudWatch and other job, log or dashboard concepts for customer support and application health checks.
• Ability to collaborate with our business partners to understand and implement their requirements.
• Excellent interpersonal skills and the ability to build consensus across teams.
• Strong critical thinking and ability to think out of the box.
• Self-motivated and able to perform under pressure.
• AWS certified (preferred).

Qualifications

• Education: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
• Experience: proven experience as an AWS Support Engineer or in a similar role; hands-on experience with a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, CloudFormation, IAM).
• Soft skills: excellent communication and interpersonal skills; strong problem-solving and analytical skills; ability to work independently and as part of a team; customer-focused mindset with a commitment to delivering high-quality support.

What We Offer

• Competitive salary and benefits package.
• Opportunities for professional growth and development.
• A collaborative and supportive work environment.
• Access to the latest AWS technologies and training resources.

If you are passionate about cloud technology and enjoy helping customers solve complex technical challenges, we would love to hear from you! Acknowledge the mail with your updated CV to Julekha.Gousiya@astrosofttech.com, including the following details:

• Total Experience -
• AWS Redshift, Glue -
• Current Location -
• Current Company -
• Current CTC -
• Expected CTC -
• Offer -
• Notice Period -
• Ready to relocate to Hyderabad (Y/N) - Yes
• Hybrid (Y/N) - Yes

Posted 6 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description

Surface Transportation (ST) is seeking a highly skilled and motivated team player to be part of the dynamic ROC team, which supports NA and EU Surface Transportation Operations. The Surface Transportation Operations team addresses disruptions in the Middle Mile network, supporting drivers and carriers faced with unexpected events (poor weather, road closures, unexpected surges in volume, mechanical breakdowns, etc.) to allow them to deliver packages safely and on time.

As a Business Analyst you will enable effective decision making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format. You will partner with Operations, Product and Tech teams to solve business problems with a focus on understanding root causes and driving forward-looking opportunities. Your ownership of the scoping and design of new metrics, and enhancement of existing ones, will help support the future state of business processes and ensure sustainability. You will communicate complex analysis and insights to stakeholders and business leaders, both verbally and in writing. These analytics and metrics will help ensure we are focused on what's important, enable clarity and focus, and delight our customers. This position will support our expansion to other geographies.

Key job responsibilities

• Work closely with Ops managers and stakeholders to understand business issues and recommend solutions to ensure business questions are properly answered.
• Understand trends related to Amazon business and recommend strategies to stakeholders to help drive business growth.
• Report key insight trends, using statistical rigor to simplify and inform the larger team of noteworthy story lines.
• Respond with urgency to high-priority requests from senior business leaders.
• Identify, develop, manage, and execute analyses to uncover areas of opportunity and present written business recommendations that will help shape the direction of the business.
• Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
• Ensure data accuracy by validating data for new and existing tools.
• Learn and understand a broad range of Amazon's data resources and know how, when, and which to use and which not to use.
• Engage stakeholders in constructive dialogues to convert business problems into logic problems that can be solved with data, statistics, and scripting.
• Manage and execute entire projects or components of large projects from start to finish, including project management, data gathering and manipulation, synthesis and modeling, problem solving, and communication of insights and recommendations.

Basic Qualifications

• 3+ years of experience in a business analyst, data analyst or similar role
• 5+ years of experience with Excel (including VBA, pivot tables, array functions, power pivots, etc.) and data visualization tools such as Tableau
• 3+ years of business or financial analysis experience
• Bachelor's degree or equivalent
• Experience defining requirements and using data and metrics to draw business insights
• Experience with SQL
• Experience making business recommendations and influencing stakeholders

Preferred Qualifications

• Experience in a scripting language like Python
• Understanding of AWS systems like Redshift, S3, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A2931747

Posted 6 days ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, Amazon Web Services' managed data warehouse service, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find a wealth of opportunities across industries throughout the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  • Junior Developer
  • Data Engineer
  • Senior Data Engineer
  • Tech Lead
  • Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift; a worked SQL sketch follows this list. (medium)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
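
Several of the questions above (table design with DISTKEY and SORTKEY, the COPY command, data loading best practices, vacuuming) can be illustrated with a short, self-contained sketch. The table, S3 path, and IAM role below are hypothetical placeholders, not prescribed answers:

```sql
-- A fact table whose distribution and sort keys follow its dominant access
-- pattern: joins on customer_id, range filters on sale_date.
CREATE TABLE sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12,2)
)
DISTKEY (customer_id)   -- co-locates rows that join on customer_id on the same slice
SORTKEY (sale_date);    -- lets range-restricted scans skip disk blocks

-- Bulk loading: COPY ingests files from S3 in parallel across slices,
-- the recommended alternative to row-by-row INSERTs.
COPY sales
FROM 's3://example-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
FORMAT AS CSV;

-- After heavy loads or deletes: re-sort, reclaim space, refresh statistics.
VACUUM FULL sales;
ANALYZE sales;
```

In a real cluster, key choices like these should be validated against actual query plans (EXPLAIN) and revisited as access patterns change.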

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!
