
1759 Redshift Jobs - Page 38

JobPe aggregates listings for easy access; you apply directly on the original job portal.

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Job Description: Staff Analyst, Business Intelligence

Bloom Energy faces an unprecedented opportunity to change the world and how energy is generated and delivered. Our mission is to make clean, reliable energy affordable globally. Bloom’s Energy Server delivers highly reliable, resilient, always-on electric power that is clean, cost-effective, and ideal for microgrid applications. We are helping our customers power their operations without disruption and combustion.

We seek a Staff Analyst to join our team in one of today’s most exciting technologies. This role reports to the Business Intelligence Senior Manager in Mumbai, India. As a member of the Business Intelligence team, you will design, implement, and maintain full-stack applications and develop optimization algorithms, collaborating closely with stakeholders to integrate user feedback and technological advancements.

Responsibilities:
Develop optimization algorithms and production-ready tools for Service Operations
Understand service operations problems and develop full software tools that improve work efficiency and ensure appropriate business decisions are made
Support ad hoc requests for data analysis and scenario planning from the operations team
Develop automated tools for monitoring and maintaining critical customer performance
Develop and manage well-functioning databases and applications
Rapidly fix bugs, solve problems, and proactively strive to improve our products and technologies
Mentor and train junior team members

Requirements:
Strong hands-on experience and understanding of object-oriented programming, data structures, algorithms, and web application development
Proficiency with back-end languages (e.g., Python, Ruby, Java)
Familiarity with databases/data lakes (e.g., PostgreSQL, Cassandra, AWS RDS, Redshift, S3)
Knowledge of front-end languages (e.g., HTML, CSS, JavaScript, React, Redux, Vue, or Angular) would be a plus
Experience with Git or other version control software
Knowledge of distributed systems, test-driven development, SQL and NoSQL databases, performance optimization tools, and AWS services (e.g., EC2, Lambda, ECS, EKS) for app deployment
Excellent problem-solving skills

Education:
Bachelor’s degree in Computer Science, Computer Engineering, or related fields

About Bloom Energy:
At Bloom Energy, we support a 100% renewable future. Our fuel-flexible technology offers one of the most resilient electricity solutions for a world facing unacceptable power disruptions. Our resilient platform has proven itself by powering through hurricanes, earthquakes, forest fires, extreme heat, and utility failures. Unlike backup generators, our fuel cells create no harmful local air pollutants. At the same time, Bloom is at the forefront of the transition to renewable fuels like hydrogen and biogas with new hydrogen power generation and electrolyzer solutions. Our customers include, but are not limited to, manufacturing, data centers, healthcare, retail, low-income housing, colleges, and more. For more information, visit www.bloomenergy.com.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Job Description: Business Analyst, Business Intelligence

Bloom Energy faces an unprecedented opportunity to change the world and how energy is generated and delivered. Our mission is to make clean, reliable energy affordable globally. Bloom’s Energy Server delivers highly reliable, resilient, always-on electric power that is clean, cost-effective, and ideal for microgrid applications. We are helping our customers power their operations without disruption and combustion.

We seek a Business Analyst to join our team in one of today’s most exciting technologies. This role reports to the Business Intelligence Senior Manager in Mumbai, India.

Responsibilities:
Develop automated tools and dashboards for various P&L line items to improve visibility and accuracy of the data
Work closely with the leadership team to improve forecasting tools and provide an accurate P&L forecast
Work closely with the finance team to monitor actuals versus forecast during the quarter
Support ad hoc requests for data analysis and scenario planning from the operations team
Deep dive into our costs and provide insights to the leadership team for increasing profitability
Work closely with the IT team to support the development of production-ready tools for automating the Services P&L

Requirements:
Strong analytical and problem-solving skills
Proficiency with Python, Excel, and PowerPoint is a must; experience in financial planning & forecasting is a plus
Proficiency with dashboarding tools such as Tableau
Familiarity with databases/data lakes (e.g., PostgreSQL, Cassandra, AWS RDS, Redshift, S3)
Experience with Git or other version control software

Education:
Bachelor’s degree in Business Management, Data Analytics, Computer Science, Industrial Engineering, or related fields

About Bloom Energy:
At Bloom Energy, we support a 100% renewable future. Our fuel-flexible technology offers one of the most resilient electricity solutions for a world facing unacceptable power disruptions. Our resilient platform has proven itself by powering through hurricanes, earthquakes, forest fires, extreme heat, and utility failures. Unlike backup generators, our fuel cells create no harmful local air pollutants. At the same time, Bloom is at the forefront of the transition to renewable fuels like hydrogen and biogas with new hydrogen power generation and electrolyzer solutions. Our customers include, but are not limited to, manufacturing, data centers, healthcare, retail, low-income housing, colleges, and more. For more information, visit www.bloomenergy.com.
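
For illustration only, a minimal pandas sketch of the actuals-versus-forecast tracking this role describes; the file names and columns (period, line_item, actual, forecast) are placeholders, not details from the posting:

```python
# Hypothetical sketch: file names and columns (period, line_item, actual, forecast)
# are placeholders, not taken from the posting.
import pandas as pd

def variance_report(actuals_csv: str, forecast_csv: str) -> pd.DataFrame:
    """Join actuals to forecast by period and P&L line item and compute variances."""
    actuals = pd.read_csv(actuals_csv)    # columns: period, line_item, actual
    forecast = pd.read_csv(forecast_csv)  # columns: period, line_item, forecast
    merged = actuals.merge(forecast, on=["period", "line_item"], how="outer").fillna(0)
    merged["variance"] = merged["actual"] - merged["forecast"]
    # Avoid dividing by zero when no forecast exists for a line item.
    merged["variance_pct"] = merged["variance"] / merged["forecast"].where(merged["forecast"] != 0)
    return merged.sort_values(["period", "line_item"])

if __name__ == "__main__":
    print(variance_report("actuals.csv", "forecast.csv").head())
```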

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Company Description

About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.

Job Description
The world is how we shape it.

BI Solutioning & Data Engineering:
Design, build, and manage end-to-end Business Intelligence solutions, integrating structured and unstructured data from internal and external sources.
Architect and maintain scalable data pipelines using cloud-native services (e.g., AWS, Azure, GCP).
Implement ETL/ELT processes to ensure data quality, transformation, and availability for analytics and reporting.

Market Intelligence & Analytics Enablement:
Support the Market Intelligence team by building dashboards, visualizations, and data models that reflect competitive, market, and customer insights.
Work with research analysts to convert qualitative insights into measurable datasets.
Drive the automation of insight delivery, enabling real-time or near real-time updates.

Visualization & Reporting:
Design interactive dashboards and executive-level visual reports using tools such as Power BI or Tableau.
Maintain data storytelling standards to deliver clear, compelling narratives aligned with strategic objectives.

Stakeholder Collaboration:
Act as a key liaison between business users, strategy teams, research analysts, and IT/cloud engineering.
Translate analytical and research needs into scalable, sustainable BI solutions.
Educate internal stakeholders on the capabilities of BI platforms and insight delivery pipelines.

Preferred: Cloud Infrastructure & Data Integration:
Collaborate with cloud engineering teams to deploy BI tools and data lakes in a cloud environment.
Ensure the data warehousing architecture is aligned with market research and analytics needs.
Optimize data models and storage for scalability, performance, and security.

Total Experience Expected: 6-9 years

Qualifications

Must:
Bachelor’s/Master’s degree in Computer Science, Data Science, Business Analytics, or a related technical field.
6+ years of experience in Business Intelligence, Data Engineering, or Cloud Data Analytics.
Proficiency in SQL, Python, or other data wrangling languages.
Deep knowledge of BI tools like Power BI, Tableau, or QlikView.
Strong data modeling, ETL, and data governance capabilities.

Preferred:
Solid understanding of cloud platforms (AWS, Azure, GCP), with hands-on experience in cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery).
Exposure to market intelligence, competitive analysis, or strategic analytics is highly desirable.
Excellent communication, stakeholder management, and visualization/storytelling skills.

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Description

Have you ever thought about what it takes to detect and prevent fraudulent activity among hundreds of millions of e-commerce transactions across the globe? What would you do to increase trust in an online marketplace where millions of buyers and sellers transact? How would you build systems that evolve over time to proactively identify and neutralize new and emerging fraud threats?

Our mission in Buyer Risk Prevention is to make Amazon the safest place to transact online. Buyer Risk Prevention safeguards every financial transaction across all Amazon sites, while striving to ensure that these efforts are transparent to our legitimate customers. As such, Buyer Risk Prevention designs and builds the software systems, risk models and operational processes that minimize risk and maximize trust in Amazon.com.

As a Business Analyst in Buyer Risk Prevention, you will be responsible for analyzing terabytes of data to identify specific instances of risk, broader risk trends and points of customer friction, and for developing scalable solutions for prevention. You will need to collaborate effectively with business and product leaders within BRP and cross-functional teams to solve problems, create operational efficiencies, and deliver successfully against high organizational standards. You should be able to apply a breadth of tools, data sources, and analytical techniques to answer a wide range of high-impact business questions and proactively present new insights in a concise and effective manner. In addition, you will be responsible for building a robust set of operational and business metrics and will utilize those metrics to determine improvement opportunities. You should be an effective communicator capable of independently driving issues to resolution and communicating insights to non-technical audiences. This is a high-impact role with goals that directly impact the bottom line of the business.

Responsibilities:
Understand the various operations across Payment Risk
Design and develop highly available dashboards and metrics using SQL and Excel/Tableau
Perform business analysis and data queries using scripting languages like R, Python, etc.
Understand the requirements of stakeholders and map them with the data sources/data warehouse
Own the delivery and backup of periodic metrics and dashboards to the leadership team
Draw inferences and conclusions, create dashboards and visualizations of processed data, and identify trends and anomalies
Execute high-priority (i.e., cross-functional, high-impact) projects to create robust, scalable analytics solutions and frameworks with the help of Analytics/BIE managers
Perform business analysis and data queries using appropriate tools
Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area
Execute analytical projects with an understanding of analytical methods (ANOVA, distribution theory, regression, forecasting, machine learning techniques, etc.)
Draw inferences and insights from the data using EDA and data manipulation with advanced SQL for business reviews

Key job responsibilities:
Understand the various operations across Payment Risk
Design and develop highly available dashboards and metrics using SQL and Excel/Tableau/QuickSight
Understand the requirements of stakeholders and map them with the data sources/data warehouse
Own the delivery and backup of periodic metrics and dashboards to the leadership team
Draw inferences and conclusions, create dashboards and visualizations of processed data, and identify trends and anomalies
Execute high-priority (i.e., cross-functional, high-impact) projects to improve operations performance with the help of Analytics managers
Perform business analysis and data queries using appropriate tools
Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area
Execute analytical projects with an understanding of analytical methods (ANOVA, distribution theory, regression, forecasting, machine learning techniques, etc.)

Basic Qualifications:
Bachelor's degree in finance, accounting, business, economics, engineering, analytics, mathematics, statistics or a related technical or quantitative field
3+ years of experience in a business analyst, data analyst or similar role
5+ years of experience with Excel (including VBA, pivot tables, array functions, power pivots, etc.) and data visualization tools such as Tableau
Experience defining requirements and using data and metrics to draw business insights
Experience with Excel
Experience with SQL
Experience making business recommendations and influencing stakeholders
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience creating complex SQL queries joining multiple datasets; ETL/DW concepts
Experience demonstrating problem solving and root cause analysis
Experience using databases with large-scale data sets
Detail-oriented, with an aptitude for solving unstructured problems. The role will require the ability to extract data from various sources and to design, construct, and execute complex analyses to finally come up with data/reports that help solve the business problem.

Preferred Qualifications:
Experience with Amazon Redshift and other AWS technologies
Experience scripting for automation (e.g., Python, Perl, Ruby)
Experience using Python or R for data analysis, or statistical tools such as SAS
Experience in e-commerce / online companies in fraud / risk control functions
Analytical mindset and the ability to see the big picture and influence others
Good oral, written and presentation skills, combined with the ability to be part of group discussions and explain complex solutions
Ability to apply analytical, computer, statistical and quantitative problem-solving skills
Ability to work effectively in a multi-task, high-volume environment
Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - BLR 14 SEZ
Job ID: A2918230
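
By way of illustration, a small sketch of the kind of dashboard-metric query this role involves, run against Redshift from Python; the cluster endpoint, credentials, and tables (orders, risk_flags) are placeholders only:

```python
# Hypothetical sketch: the cluster endpoint, credentials, and tables (orders, risk_flags)
# are placeholders, not details from the posting.
import psycopg2

QUERY = """
    SELECT o.order_day,
           COUNT(*) AS total_orders,
           SUM(CASE WHEN r.is_fraud THEN 1 ELSE 0 END) AS flagged_orders
    FROM orders o
    LEFT JOIN risk_flags r ON r.order_id = o.order_id
    WHERE o.order_day >= DATEADD(day, -28, CURRENT_DATE)
    GROUP BY o.order_day
    ORDER BY o.order_day;
"""

def fetch_risk_metrics():
    # Redshift speaks the PostgreSQL wire protocol, so psycopg2 can act as the client.
    conn = psycopg2.connect(host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
                            port=5439, dbname="analytics", user="readonly", password="***")
    try:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            return cur.fetchall()
    finally:
        conn.close()
```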

Posted 2 weeks ago

Apply

40.0 years

4 - 9 Lacs

Hyderabad

On-site

Source: Glassdoor

India - Hyderabad
JOB ID: R-216618
LOCATION: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 01, 2025
CATEGORY: Information Systems

Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Business Intelligence Engineer
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do
As a Business Intelligence Engineer, you will solve unique and complex problems at a rapid pace, utilizing the latest technologies to create solutions that are highly scalable. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable solutions and responding to requests for rapid releases of analytical outcomes.
Design, develop, and maintain interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos, others).
Analyse datasets to identify trends, patterns, and insights that inform business strategy and decision-making.
Partner with leaders and stakeholders across Finance, Sales, Customer Success, Marketing, Product, and other departments to understand their data and reporting requirements.
Stay abreast of the latest trends and technologies in business intelligence and data analytics, including the use of AI in BI.
Elicit and document clear and comprehensive business requirements for BI solutions, translating business needs into technical specifications and solutions.
Collaborate with Data Engineers to ensure efficient upstream transformations and create data models/views that will hydrate accurate and reliable BI reporting.
Contribute to data quality and governance efforts to ensure the accuracy and consistency of BI data.

What we expect of you

Basic Qualifications:
Master’s degree and 1 to 3 years of Computer Science, IT or related field experience, OR
Bachelor’s degree and 3 to 5 years of Computer Science, IT or related field experience, OR
Diploma and 7 to 9 years of Computer Science, IT or related field experience

Functional Skills:
1+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing and building ETL pipelines
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

Preferred Qualifications:
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets
AWS Developer certification (preferred)

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way as you make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
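
As a rough sketch of the "pull data with SQL, process it in Python" skill this posting asks for; the connection string and the sales_by_region table are placeholders, not from the posting:

```python
# Hypothetical sketch: the connection string and the sales_by_region table are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://readonly:***@example-cluster.redshift.amazonaws.com:5439/analytics"
)

# Pull a slice of warehouse data into pandas, then reshape it for a simple trend view.
df = pd.read_sql("SELECT region, month, units_sold FROM sales_by_region", engine)
pivot = df.pivot_table(index="month", columns="region", values="units_sold", aggfunc="sum")
print(pivot.pct_change().tail())  # month-over-month growth per region
```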

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi

On-site

Source: Glassdoor

Job Summary:
We are looking for a skilled and motivated Data Engineer to join our growing data team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support analytics, reporting, and machine learning initiatives. You will work closely with data analysts, data scientists, and software engineers to ensure reliable access to high-quality data across the organization.

Key Responsibilities:
Design, develop, and maintain robust and scalable data pipelines and ETL/ELT processes.
Build and optimize data architectures to support data warehousing, batch processing, and real-time data streaming.
Collaborate with data scientists, analysts, and other engineers to deliver high-impact data solutions.
Ensure data quality, consistency, and security across all systems.
Manage and monitor data workflows to ensure high availability and performance.
Develop tools and frameworks to automate data ingestion, transformation, and validation.
Participate in data modeling and architecture discussions for both transactional and analytical systems.
Maintain documentation of data flows, architecture, and related processes.

Required Skills and Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
Strong programming skills in Python, Java, or Scala.
Proficient in SQL, with experience working with relational databases (e.g., PostgreSQL, MySQL).
Experience with big data tools and frameworks (e.g., Hadoop, Spark, Kafka).
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and services like S3, Redshift, BigQuery, or Azure Data Lake.
Hands-on experience with data pipeline orchestration tools (e.g., Airflow, Luigi).
Experience with data warehousing and data modeling best practices.

Preferred Qualifications:
Experience with CI/CD for data pipelines.
Knowledge of containerization and orchestration tools like Docker and Kubernetes.
Experience with real-time data processing technologies (e.g., Apache Flink, Kinesis).
Familiarity with data governance and security practices.
Exposure to machine learning pipelines is a plus.
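
For illustration, a minimal Airflow sketch of the pipeline orchestration mentioned above; the DAG id, schedule, and task bodies are placeholders, not from the posting:

```python
# Hypothetical sketch of a daily ELT DAG: the DAG id, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw files from the source system into object storage")

def load(**_):
    print("bulk-load the staged files into the warehouse")

def transform(**_):
    print("run SQL transformations into reporting tables")

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_load >> t_transform
```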

Posted 2 weeks ago

Apply

7.0 years

15 - 20 Lacs

Chennai

On-site

Source: Glassdoor

Job Title: Data Architect / Engagement Lead
Location: Chennai
Reports To: CEO

About the Company:
Ignitho Inc. is a leading AI and data engineering company with a global presence, including offices in the US, UK, India, and Costa Rica. Visit our website to learn more about our work and culture: www.ignitho.com. Ignitho is a portfolio company of Nuivio Ventures Inc., a venture builder dedicated to developing Enterprise AI product companies across various domains, including AI, Data Engineering, and IoT. Learn more about Nuivio at: www.nuivio.com.

Job Summary:
As the Data Architect and Engagement Lead, you will define the data architecture strategy and lead client engagements, ensuring alignment between data solutions and business goals. This dual role blends technical leadership with client-facing responsibilities.

Key Responsibilities:
Design scalable data architectures, including storage, processing, and integration layers.
Lead technical discovery and requirements-gathering sessions with clients.
Provide architectural oversight for data and AI solutions.
Act as a liaison between technical teams and business stakeholders.
Define data governance, security, and compliance standards.

Required Qualifications:
Bachelor’s or Master’s in Computer Science, Information Systems, or similar.
7+ years of experience in data architecture, with client-facing experience.
Deep knowledge of data modelling, cloud data platforms (Snowflake / BigQuery / Redshift / Azure), and orchestration tools.
Excellent communication, stakeholder management, and technical leadership skills.
Familiarity with AI/ML systems and their data requirements is a strong plus.

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year

Application Question(s):
Do you know AI/ML?
Do you know Data Modeling (Snowflake/BigQuery/Redshift/Azure)?

Work Location: In person

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Designation: Solution Architect
Office Location: Gurgaon

Position Description:
As a Technical Lead, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle from solution design through execution and launch, building the right team, and collaborating closely with business and product teams.

Primary Responsibilities:
Design end-to-end solutions that meet business requirements and align with the enterprise architecture.
Define the architecture blueprint, including integration, data flow, application, and infrastructure components.
Evaluate and select appropriate technology stacks, tools, and frameworks.
Ensure proposed solutions are scalable, maintainable, and secure.
Collaborate with business and technical stakeholders to gather requirements and clarify objectives.
Act as a bridge between business problems and technology solutions.
Guide development teams during the execution phase to ensure solutions are implemented according to design.
Identify and mitigate architectural risks and issues.
Ensure compliance with architecture principles, standards, policies, and best practices.
Document architectures, designs, and implementation decisions clearly and thoroughly.
Identify opportunities for innovation and efficiency within existing and upcoming solutions.
Conduct regular performance and code reviews, and provide feedback to development team members to support their professional development.
Lead proof-of-concept initiatives to evaluate new technologies.

Functional Responsibilities:
Facilitate daily stand-up meetings, sprint planning, sprint reviews, and retrospective meetings.
Work closely with the product owner to prioritize the product backlog and ensure that user stories are well-defined and ready for development.
Identify and address issues or conflicts that may impact project delivery or team morale.
Experience with Agile project management tools such as Jira and Trello.

Required Skills:
Bachelor's degree in Computer Science, Engineering, or a related field.
7+ years of experience in software engineering, with at least 3 years in a solution architecture or technical leadership role.
Proficiency with the AWS or GCP cloud platform.
Strong implementation knowledge of the JS tech stack (NodeJS, ReactJS).
Experience with database engines (MySQL and PostgreSQL), with proven knowledge of database migrations and high-throughput, low-latency use cases.
Experience with key-value stores like Redis, MongoDB and similar.
Preferred knowledge of distributed technologies (Kafka, Spark, Trino or similar), with proven experience in event-driven data pipelines.
Proven experience setting up big data pipelines to handle high-volume transactions and transformations.
Experience with BI tools: Looker, PowerBI, Metabase or similar.
Experience with data warehouses like BigQuery, Redshift, or similar.
Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC (Terraform/CloudFormation).

Good to Have:
Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, TOGAF, etc.
Experience setting up analytical pipelines using BI tools (Looker, PowerBI, Metabase or similar) and low-level Python tools like Pandas, Numpy, PyArrow.
Experience with data transformation tools like DBT, SQLMesh or similar.
Experience with data orchestration tools like Apache Airflow, Kestra or similar.

Work Environment Details:

About Affle:
Affle is a global technology company with a proprietary consumer intelligence platform that delivers consumer engagement, acquisitions, and transactions through relevant mobile advertising. The platform aims to enhance returns on marketing investment through contextual mobile ads and by reducing digital ad fraud. While Affle's Consumer platform is used by online and offline companies for measurable mobile advertising, its Enterprise platform helps offline companies go online through platform-based app development, enablement of O2O commerce, and its customer data platform. Affle India successfully completed its IPO in India on 08 Aug 2019 and now trades on the stock exchanges (BSE: 542752 & NSE: AFFLE). Affle Holdings is the Singapore-based promoter for Affle India, and its investors include Microsoft and Bennett Coleman & Company (BCCL), amongst others. For more details: www.affle.com

About BU:
Ultra - Access deals, coupons, and walled-garden based user acquisition on a single platform, offering bottom-funnel optimization across multiple inventory sources. For more details, please visit: https://www.ultraplatform.io/

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description

Do you want to be a leader in the team that takes Transportation and Retail models to the next generation? Do you have solid analytical thinking and metrics-driven decision making, and do you want to solve problems with solutions that will meet the growing worldwide need? Then Transportation is the team for you. We are looking for top-notch Data Engineers to be part of our world-class Business Intelligence for Transportation team.

4-7 years of experience performing quantitative analysis, preferably for an Internet or technology company
Strong experience in Data Warehouse and Business Intelligence application development
Data analysis: understand business processes, logical data models and relational database implementations
Expert knowledge of SQL; able to optimize complex queries
Basic understanding of statistical analysis; experience in test design and measurement
Able to execute research projects and generate practical results and recommendations
Proven track record of working on complex modular projects, and assuming a leading role in such projects
Highly motivated, self-driven, capable of defining own design and test scenarios
Experience with scripting languages (e.g., Perl, Python) preferred
BS/MS degree in Computer Science
Evaluate and implement various big-data technologies and solutions (Redshift, Hive/EMR, Tez, Spark) to optimize processing of extremely large datasets in an accurate and timely fashion
Experience with large-scale data processing, data structure optimization and scalability of algorithms a plus

Key job responsibilities:
Responsible for designing, building and maintaining complex data solutions for Amazon's Operations businesses
Actively participates in the code review process, design discussions, team planning, and operational excellence, and constructively identifies problems and proposes solutions
Makes appropriate trade-offs, re-uses where possible, and is judicious about introducing dependencies
Makes efficient use of resources (e.g., system hardware, data storage, query optimization, AWS infrastructure, etc.)
Knows about recent advances in distributed systems (e.g., MapReduce, MPP architectures, external partitioning)
Asks the right questions when the data model and requirements are not well defined, and comes up with designs that are scalable, maintainable and efficient
Makes enhancements that improve the team’s data architecture, making it better and easier to maintain (e.g., data auditing solutions, automating ad-hoc or manual operational steps)
Owns the data quality of important datasets and any new changes/enhancements

Basic Qualifications:
3+ years of data engineering experience
4+ years of SQL experience
Experience with data modeling, warehousing and building ETL pipelines

Preferred Qualifications:
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2941103
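
As a rough sketch of the large-dataset loading and query-optimization work this posting mentions, a Redshift COPY from S3 issued from Python; the cluster endpoint, bucket, table, and IAM role ARN are placeholders only:

```python
# Hypothetical sketch: the cluster endpoint, bucket, table, and IAM role ARN are placeholders.
import psycopg2

COPY_SQL = """
    COPY analytics.shipment_events
    FROM 's3://example-bucket/shipments/dt=2025-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
                        port=5439, dbname="analytics", user="loader", password="***")
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)                               # parallel bulk load from S3
    cur.execute("ANALYZE analytics.shipment_events;")   # refresh planner statistics
```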

Posted 2 weeks ago

Apply

3.0 years

4 - 9 Lacs

Bengaluru

On-site

Source: Glassdoor

Level Up Your Career with Zynga!

At Zynga, we bring people together through the power of play. As a global leader in interactive entertainment and a proud label of Take-Two Interactive, our games have been downloaded over 6 billion times—connecting players in 175+ countries through fun, strategy, and a little friendly competition. From thrilling casino spins to epic strategy battles, mind-bending puzzles, and social word challenges, our diverse game portfolio has something for everyone. Fan favorites and latest hits include FarmVille™, Words With Friends™, Zynga Poker™, Game of Thrones Slots Casino™, Wizard of Oz Slots™, Hit it Rich! Slots™, Wonka Slots™, Top Eleven™, Toon Blast™, Empires & Puzzles™, Merge Dragons!™, CSR Racing™, Harry Potter: Puzzles & Spells™, Match Factory™, and Color Block Jam™—plus many more! Founded in 2007 and headquartered in California, our teams span North America, Europe, and Asia, working together to craft unforgettable gaming experiences. Whether you're spinning, strategizing, matching, or competing, Zynga is where fun meets innovation—and where you can take your career to the next level. Join us and be part of the play!

TBD

We are seeking experienced and passionate engineers to join our collaborative and innovative team. Zynga’s mission is to “Connect the World through Games” by building a truly social experience that makes the world a better place. The ideal candidate needs to have a strong focus on building high-quality, maintainable software that has global impact. The Analytics Engineering team is responsible for all things data at Zynga. We own the full game and player data pipeline - from ingestion to storage to driving insights and analytics. As a Data Engineer, you will be responsible for the software design and development of quality services and products to support the Analytics needs of our games. In this role, you will be part of our Analytics Engineering group, focusing on advanced technology developments for building scalable data infrastructure and end-to-end services which can be leveraged by the various games. We are a 120+ person organization servicing 1,500 others across 13 global locations.

Your responsibilities will include:
Build and operate a multi-PB-scale data platform.
Design, code, and develop new features, fix bugs, and deliver enhancements to systems and data pipelines (ETLs) while adhering to the SLA.
Identify anomalies and inconsistencies in data sets and algorithms, flag them to the relevant team, and/or fix the bugs in the data workflows where applicable.
Follow the best engineering methodologies towards ensuring performance, reliability, scalability, and measurability.
Collaborate effectively with teammates, contributing to an innovative environment of technical excellence.

You will be a perfect fit if you have:
Bachelor’s degree in Computer Science or a related technical discipline (or equivalent).
3+ years of strong data engineering design/development experience in building large-scale, distributed data platforms/products.
Advanced coding expertise in SQL and Python/JVM-based languages.
Exposure to heterogeneous data storage systems like relational, NoSQL, in-memory, etc.
Knowledge of data modeling, lineage, data access and its governance.
Proficiency in AWS services like Redshift, Kinesis, Lambda, RDS, EKS/ECS, etc.
Exposure to open source software, frameworks and broader powerful technologies (Airflow, Kafka, DataHub, etc.).
Proven ability to deliver work on time with attention to quality.
Excellent written and spoken communication skills and the ability to work optimally in a geographically distributed team environment.

We encourage you to apply even if you don’t meet every single requirement. Your unique perspective and experience could be exactly what we’re looking for.

We are proud to be an equal opportunity employer, which means we are committed to creating and celebrating diverse thoughts, cultures, and backgrounds throughout our organization. Employment with us is based on substantive ability, objective qualifications, and work ethic – not an individual’s race, creed, color, religion, sex or gender, gender identity or expression, sexual orientation, national origin or ancestry, alienage or citizenship status, physical or mental disability, pregnancy, age, genetic information, veteran status, marital status, status as a victim of domestic violence or sex offenses, reproductive health decision, or any other characteristics protected by applicable law. As an equal opportunity employer, we are committed to providing the necessary support and accommodation to qualified individuals with disabilities, health conditions, or impairments (subject to any local qualifying requirements) to ensure their full participation in the job application or interview process. Please contact us at accommodationrequest@zynga.com to request any accommodations or for support related to your application for an open position.

Please be aware that Zynga does not conduct job interviews or make job offers over third-party messaging apps such as Telegram, WhatsApp, or others. Zynga also does not engage in any financial exchanges during the recruitment or onboarding process, and will never ask a candidate for their personal or financial information over an app or other unofficial chat channel. Any attempt to do so may be the result of a scam or phishing attack, and you should not engage. Zynga’s in-house recruitment team will only contact individuals through their official Company email addresses (i.e., via a zynga.com, naturalmotion.com, smallgiantgames.com, themavens.com, gram.gs email domain).

Posted 2 weeks ago

Apply

50.0 years

9 - 9 Lacs

Pune

On-site

Source: Glassdoor

About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for 50 years in the US. Data Axle has set up a strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and leveraging proprietary business and consumer databases. Data Axle is headquartered in Dallas, TX, USA.

Roles & Responsibilities:
We are looking for a Data Engineer who will design, implement and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
Design, implement and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.
Create and support real-time data pipelines built on AWS technologies including Glue, Redshift/Spectrum, Kinesis, EMR and Athena.
Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
Work closely with team members to drive real-time model implementations for monitoring and alerting of risk systems.
Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning.
Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

Requirements:
3-5+ years of industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets.
Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related technical discipline.
Demonstrated strength in data modeling, ETL development, and data warehousing.
Experience in big data processing using Spark.
Knowledge of data management fundamentals and data storage principles.
Experience using business intelligence reporting tools (Tableau, Business Objects, Cognos, Power BI, etc.).
Experience working with AWS big data technologies (Redshift, S3, EMR, Spark).
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience working with distributed systems as it pertains to data storage and computing.
Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.
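
For illustration, a minimal sketch of feeding the kind of real-time AWS pipeline described above; the stream name and event shape are placeholders, not from the posting:

```python
# Hypothetical sketch: the stream name and event shape are placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict) -> None:
    """Push one JSON event onto a Kinesis stream for downstream consumers (Glue/EMR/Redshift)."""
    kinesis.put_record(
        StreamName="risk-events",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("account_id", "unknown")),
    )

publish_event({"account_id": 42, "event_type": "score_update", "score": 0.87})
```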

Posted 2 weeks ago

Apply

4.0 - 7.0 years

20 - 35 Lacs

Pune

On-site

Source: Glassdoor

WE ARE LOOKING FOR IMMEDIATE JOINERS, OR CANDIDATES WHO CAN JOIN WITHIN 30 DAYS, FROM AN AI/ML BACKGROUND.

Location: Kharadi, Pune, and Bangalore
Experience: 4 to 7 years

We are looking for a Data Scientist with strong experience in automation, data processing, and applied machine learning. This role will focus on building intelligent solutions using Python, R, SQL, and cloud technologies to drive automation, analytics, and sustainability-focused initiatives.

Key Responsibilities:
Design, build, and maintain data pipelines and architectures for scalable analytics
Analyze large, complex datasets to extract actionable insights
Develop and deploy machine learning models and LLMs for predictive and NLP use cases
Lead automation projects using Python, SQL, Excel macros/VBA, and APIs
Implement ETL workflows and ensure high data quality and reliability
Perform data cleaning, preprocessing, and feature engineering
Collaborate with stakeholders to support ESG and sustainability data initiatives
Create visualizations and dashboards using Tableau or Power BI

Desired Skills & Qualifications:
Proficient in Python and R for data analysis, modeling, and automation
Hands-on experience working with machine learning models, including LLMs
Strong expertise in SQL and NoSQL for data querying and management
Advanced knowledge of Excel, including macros and VBA scripting
Experience working with APIs for data integration and process automation
Familiarity with cloud platforms (especially AWS, Redshift, SQL Server)
Experience with data visualization tools like Tableau or Power BI
Understanding of ESG metrics and experience working with sustainability data (preferred)

Job Types: Full-time, Permanent
Pay: ₹2,000,000.00 - ₹3,500,000.00 per year

Application Question(s):
What is your current CTC?
What is your expected CTC?
What is your notice period?

Experience:
Python: 3 years (Required)
SQL: 4 years (Required)
Tableau: 1 year (Preferred)
Power BI: 1 year (Preferred)
NoSQL: 4 years (Required)

Work Location: In person
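
For illustration, a minimal scikit-learn sketch of the applied machine learning work this role describes; the CSV, features, and label are invented for illustration and are not from the posting:

```python
# Hypothetical sketch: the CSV, features, and label are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("esg_features.csv")             # e.g. site-level sustainability data
X = df[["energy_kwh", "fleet_km", "headcount"]]  # numeric features
y = df["exceeds_emissions_target"]               # binary label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```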

Posted 2 weeks ago

Apply

6.0 years

8 - 10 Lacs

Noida

On-site

Source: Glassdoor

Company Description

About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.

Job Description

BI Solutioning & Data Engineering:
Design, build, and manage end-to-end Business Intelligence solutions, integrating structured and unstructured data from internal and external sources.
Architect and maintain scalable data pipelines using cloud-native services (e.g., AWS, Azure, GCP).
Implement ETL/ELT processes to ensure data quality, transformation, and availability for analytics and reporting.

Market Intelligence & Analytics Enablement:
Support the Market Intelligence team by building dashboards, visualizations, and data models that reflect competitive, market, and customer insights.
Work with research analysts to convert qualitative insights into measurable datasets.
Drive the automation of insight delivery, enabling real-time or near real-time updates.

Visualization & Reporting:
Design interactive dashboards and executive-level visual reports using tools such as Power BI or Tableau.
Maintain data storytelling standards to deliver clear, compelling narratives aligned with strategic objectives.

Stakeholder Collaboration:
Act as a key liaison between business users, strategy teams, research analysts, and IT/cloud engineering.
Translate analytical and research needs into scalable, sustainable BI solutions.
Educate internal stakeholders on the capabilities of BI platforms and insight delivery pipelines.

Preferred: Cloud Infrastructure & Data Integration:
Collaborate with cloud engineering teams to deploy BI tools and data lakes in a cloud environment.
Ensure the data warehousing architecture is aligned with market research and analytics needs.
Optimize data models and storage for scalability, performance, and security.

Total Experience Expected: 6-9 years

Qualifications

Must:
Bachelor’s/Master’s degree in Computer Science, Data Science, Business Analytics, or a related technical field.
6+ years of experience in Business Intelligence, Data Engineering, or Cloud Data Analytics.
Proficiency in SQL, Python, or other data wrangling languages.
Deep knowledge of BI tools like Power BI, Tableau, or QlikView.
Strong data modeling, ETL, and data governance capabilities.

Preferred:
Solid understanding of cloud platforms (AWS, Azure, GCP), with hands-on experience in cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery).
Exposure to market intelligence, competitive analysis, or strategic analytics is highly desirable.
Excellent communication, stakeholder management, and visualization/storytelling skills.

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Looking for Senior Data Engineers / Data Architects with 7+ years of experience.

Location: Chennai/Hyderabad
Notice Period: Immediate to 30 days (ONLY)
Mandatory key skills: AWS, Databricks, Python, PySpark, SQL

1. Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
2. Data Integration and Transformation: Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
3. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
4. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
5. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
6. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
7. Migration and Upgrades: Lead migrations from legacy data systems to modern cloud-based platforms, ensuring smooth transitions and enhanced scalability.
8. Cost Optimization: Implement strategies for reducing cloud infrastructure costs, such as optimizing resource usage, setting up lifecycle policies, and automating cost alerts.
9. Data Security and Compliance: Ensure secure access to data by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards.
10. Collaboration and Support: Work closely with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks.
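
For illustration, a minimal PySpark sketch of the ingest-transform-store pattern listed above; the S3 paths and column names are placeholders, and a Databricks/Delta Lake runtime is assumed:

```python
# Hypothetical sketch: paths and columns are placeholders; assumes a Databricks/Delta Lake runtime.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/2025/06/")
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write the curated data as a Delta table, partitioned by date for downstream consumers.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("s3://example-curated-bucket/orders/"))
```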

Posted 2 weeks ago

Apply

4.0 - 6.0 years

8 - 15 Lacs

Jaipur

Remote

Source: Glassdoor

Senior Data Engineer

Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.

Role: Senior Data Engineer
Experience: 4-6 years
Location: Udaipur, Jaipur, Kolkata

Job Description:
We are looking for a highly skilled and experienced Data Engineer with 4-6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.

Key Responsibilities:
Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python
Collaborate with data analysts, data scientists, and product teams to understand data needs
Optimize queries and data models for performance and reliability
Integrate data from various sources, including APIs, internal databases, and third-party systems
Monitor and troubleshoot data pipelines to ensure data quality and integrity
Document processes, data flows, and system architecture
Participate in code reviews and contribute to a culture of continuous improvement

Required Skills:
4-6 years of experience in data engineering, data architecture, or backend development with a focus on data
Strong command of SQL for data transformation and performance tuning
Experience with Python (e.g., pandas, Spark, ADF)
Solid understanding of ETL/ELT processes and data pipeline orchestration
Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
Basic programming skills
Excellent problem-solving skills and a passion for clean, efficient data systems

Preferred Skills:
Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
Exposure to enterprise solutions (e.g., Databricks, Synapse)
Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
Background in real-time data streaming and event-driven architectures
Understanding of data governance, security, and compliance best practices
Prior experience working in an agile development environment

Educational Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.

Visit us:
https://kadellabs.com/
https://in.linkedin.com/company/kadel-labs
https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm

Job Types: Full-time, Permanent
Pay: ₹826,249.60 - ₹1,516,502.66 per year

Benefits:
Flexible schedule
Health insurance
Leave encashment
Paid time off
Provident Fund
Work from home

Schedule:
Day shift
Monday to Friday

Supplemental Pay:
Overtime pay
Performance bonus
Quarterly bonus
Yearly bonus

Ability to commute/relocate:
Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Required)

Experience:
Data Engineer: 4 years (Required)

Location:
Jaipur, Rajasthan (Required)

Work Location: In person

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Title: Data Engineer
Location: Chennai
Experience Level: 3-6 Years
Employment Type: Full-time

About Us: SuperOps is a SaaS start-up empowering IT service providers and IT teams around the world with technology that is cutting-edge, future-ready, and powered by AI. We are backed by marquee investors like Addition, March Capital, Matrix Partners India, Elevation Capital, and Tanglin Venture Partners. Founded by Arvind Parthiban, a serial entrepreneur, and Jayakumar Karumbasalam, a veteran in the IT space, SuperOps is built on the back of a team of engineers, product architects, designers, and AI experts who want to reshape the world of IT. Now we have taken on a market that is plagued by legacy solutions and subpar experiences. The potential to do something great is immense. So if you love to grow, be part of a kickass team that inspires you to do more, and make an everlasting mark in the world of IT, SuperOps is the place to be. We also believe that the journey is as important as the destination. We want to build the best products out there and have fun while doing so. So come, and be part of our A-star team of superheroes.

Role Summary: We are seeking a skilled and motivated Data Engineer to join our growing team. In this role, you will be instrumental in designing, building, and maintaining our data infrastructure, ensuring that reliable and timely data is available for analysis across the organization. You will work closely with various teams to integrate data from diverse sources and transform it into actionable insights that drive our business forward.

Key Responsibilities:
  • Design, develop, and maintain scalable and robust data pipelines to ingest data from various sources, including CRM systems (e.g., Salesforce), billing platforms, product analytics tools (e.g., Mixpanel, Amplitude), and marketing platforms (e.g., Google Ads, HubSpot).
  • Build, manage, and optimize our data warehouse to serve as the central repository for all business-critical data.
  • Implement and manage efficient data synchronization processes between source systems and the data warehouse.
  • Oversee the storage and management of raw data, ensuring data integrity and accessibility.
  • Develop and maintain data transformation pipelines (ETL/ELT) to process raw data into clean, structured formats suitable for analytics, reporting, and dashboarding.
  • Ensure seamless synchronization and consistency between raw and processed data layers.
  • Collaborate with data analysts, product managers, and other stakeholders to understand data needs and deliver appropriate data solutions.
  • Monitor data pipeline performance, troubleshoot issues, and implement improvements for efficiency and reliability.
  • Document data processes, architectures, and definitions.

Qualifications:
  • 3-6 years of proven experience as a Data Engineer.
  • Strong experience in designing, building, and maintaining data pipelines and ETL/ELT processes.
  • Proficiency with data warehousing concepts and technologies (e.g., BigQuery, Redshift, Snowflake, Databricks).
  • Experience integrating data from various APIs and databases (SQL, NoSQL).
  • Solid understanding of data modeling principles.
  • Proficiency in programming languages commonly used in data engineering (e.g., Python, SQL).
  • Experience with workflow orchestration tools (e.g., Airflow, Prefect, Dagster).
  • Familiarity with cloud platforms (e.g., AWS, GCP, Azure).
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration abilities.

Bonus Points:
  • Experience working in a SaaS company.
  • Understanding of key SaaS business metrics (e.g., MRR, ARR, Churn, LTV, CAC).
  • Experience with data visualization tools (e.g., Tableau, Looker, Power BI).
  • Familiarity with containerization technologies (e.g., Docker, Kubernetes).

Why Join Us?
  • Impact: You'll work on a product that is revolutionising IT service management for MSPs and IT teams worldwide.
  • Growth: SuperOps is growing rapidly, and there are ample opportunities for career progression and leadership roles.
  • Collaboration: Work with talented engineers, designers, and product managers in a supportive and innovative environment.

Show more Show less
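To make the kind of ingestion work described in the posting above concrete, here is a minimal, purely illustrative extract-and-stage sketch in Python. The API endpoint, field names, and file paths are hypothetical placeholders, not anything taken from the posting.

```python
"""Illustrative ELT sketch only: endpoint, fields, and paths are hypothetical."""
import csv
import requests

API_URL = "https://api.example.com/v1/subscriptions"   # hypothetical billing-platform endpoint
STAGING_FILE = "stage/subscriptions.csv"                # local staging file for a warehouse COPY/LOAD

def extract(page_size: int = 100) -> list[dict]:
    """Pull paginated records from the source API."""
    records, page = [], 1
    while True:
        resp = requests.get(API_URL, params={"page": page, "per_page": page_size}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        if not batch:
            return records
        records.extend(batch)
        page += 1

def load_to_staging(records: list[dict]) -> None:
    """Write a clean, typed CSV that a warehouse COPY/LOAD job can ingest."""
    fields = ["account_id", "plan", "mrr", "updated_at"]
    with open(STAGING_FILE, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(records)

if __name__ == "__main__":
    load_to_staging(extract())
```

In practice a warehouse load step (for example a Redshift or BigQuery COPY/LOAD, or a dbt model) would pick up the staged file from there; the sketch only shows the extract-and-stage half of that flow.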

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Wissen Technology is Hiring for Python + Data Engineer

About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview: We are seeking a skilled and innovative Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

Experience: 5-9 Years
Location: Bangalore

Key Responsibilities:
  • Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
  • Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
  • Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
  • Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
  • Ensure data quality and consistency by implementing validation and governance practices.
  • Work on data security best practices in compliance with organizational policies and regulations.
  • Automate repetitive data engineering tasks using Python scripts and frameworks.
  • Leverage CI/CD pipelines for deployment of data workflows on AWS.

Required Skills:
  • Professional Experience: 5+ years of experience in data engineering or a related field.
  • Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
  • AWS Expertise: Hands-on experience with core AWS services for data engineering, such as AWS Glue for ETL/ELT, S3 for storage, Redshift or Athena for data warehousing and querying, Lambda for serverless compute, Kinesis or SNS/SQS for data streaming, and IAM roles for security.
  • Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
  • Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
  • DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
  • Version Control: Proficient with Git-based workflows.
  • Problem Solving: Excellent analytical and debugging skills.

The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation. Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies.
Wissen Technology provides exceptional value in mission critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted as the Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized world over by employees and employers alike and is considered the ‘Gold Standard’. Wissen Technology has created a Great Place to Work by excelling in all dimensions - High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie. Website: www.wissen.com LinkedIn: https://www.linkedin.com/company/wissen-technology Wissen Leadership: https://www.wissen.com/company/leadership-team/ Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All Wissen Thought Leadership: https://www.wissen.com/articles/ Employee Speak: https://www.ambitionbox.com/overview/wissen-technology-overview https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm Great Place to Work: https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/ https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k About Wissen Interview Process:https://www.wissen.com/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/ Latest in Wissen in CIO Insider: https://www.cioinsiderindia.com/vendor/wissen-technology-setting-new-benchmarks-in-technology-consulting-cid-1064.html Show more Show less
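As a purely illustrative sketch of the Python-plus-AWS work the Wissen posting above describes (Glue/Lambda-style transforms over S3 data feeding Redshift), here is a minimal boto3 + pandas example; the bucket names, keys, and columns are hypothetical.

```python
"""Illustrative sketch only; bucket, key, and column names are hypothetical."""
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")
RAW_BUCKET, CURATED_BUCKET = "example-raw-zone", "example-curated-zone"  # hypothetical buckets

def transform_daily_orders(date_str: str) -> None:
    # Extract: pull a raw CSV drop for the given date from the landing zone.
    obj = s3.get_object(Bucket=RAW_BUCKET, Key=f"orders/{date_str}.csv")
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    # Transform: basic cleansing and validation before the data reaches the warehouse.
    df = df.dropna(subset=["order_id", "amount"])
    df["amount"] = df["amount"].astype(float)
    daily = df.groupby("customer_id", as_index=False)["amount"].sum()

    # Load: stage the curated output where a Redshift COPY (or a Glue job) can pick it up.
    out = io.StringIO()
    daily.to_csv(out, index=False)
    s3.put_object(Bucket=CURATED_BUCKET, Key=f"orders_daily/{date_str}.csv", Body=out.getvalue())

if __name__ == "__main__":
    transform_daily_orders("2024-01-01")
```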

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Description: Spark Scala+AWS+SQL Developer (SA/M)

A Spark Scala+AWS+SQL Developer is responsible for building and maintaining distributed data processing systems using Apache Spark and Scala, leveraging AWS cloud services for scalable and efficient data solutions. The role involves developing ETL/ELT pipelines, optimizing Spark jobs, and crafting complex SQL queries for data transformation and analysis. Collaboration with teams, ensuring data quality, and adhering to best coding practices are essential aspects of the role.

Core skills include:
  • Proficiency in Apache Spark and Scala programming.
  • Expertise in SQL for database management and optimization.
  • Experience with AWS services like S3, EMR, Glue, and Redshift.
  • Knowledge of data warehousing, data lakes, and big data tools.

The position suits those passionate about data engineering and looking to work in dynamic and cloud-based environments!

Key Responsibilities:
  • Data Pipeline Development
  • Cloud-based Solutions
  • Data Processing & Transformation
  • Performance Optimization
  • Collaboration & Communication
  • Data Quality & Security
  • Continuous Improvement

Skills and Knowledge:
  1. Apache Spark: proficiency in creating distributed data processing pipelines; hands-on experience with Spark components like RDDs, DataFrames, Datasets, and Spark Streaming.
  2. Scala Programming: expertise in Scala for developing Spark applications; knowledge of functional programming concepts.
  3. AWS Services: familiarity with key AWS tools like S3, EMR, Glue, Lambda, Redshift, and Athena; ability to design, deploy, and manage cloud-based solutions.
  4. SQL Expertise: ability to write complex SQL queries for data extraction, transformation, and reporting; experience in query optimization and database performance tuning.
  5. Data Engineering: skills in building ETL/ELT pipelines for seamless data flow; understanding of data lakes, data warehousing, and data modeling.
  6. Big Data Ecosystem: knowledge of Hadoop, Kafka, and other big data tools (optional but beneficial).
  7. Version Control and CI/CD: proficiency in Git for version control; experience with continuous integration and deployment pipelines.
  8. Performance Tuning: expertise in optimizing Spark jobs and SQL queries for efficiency.

Soft Skills: strong problem-solving abilities; effective communication and collaboration skills; attention to detail and adherence to coding best practices.

Domain Knowledge: familiarity with data governance and security protocols; understanding of business intelligence and analytics requirements.

Skills Required
Role: Spark Scala+AWS+SQL Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate - B.Tech
Employment Type: Full Time, Permanent
Key Skills: Apache Spark, Scala Programming, SQL Expertise, AWS Services, ETL/ELT Pipelines

Other Information
Job Code: GO/JC/21445/2025
Recruiter Name: SPriya

Show more Show less
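The posting above centres on Scala, but the Spark ideas it lists (DataFrames, transformations, writing curated data back to S3 for Redshift or Spectrum to consume) translate directly across languages. Here is a minimal PySpark sketch for illustration only; the paths and column names are hypothetical.

```python
"""PySpark shown purely for illustration; the role itself is Scala-focused. Paths and columns are hypothetical."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Read raw events from a data-lake path (e.g., an S3 prefix on EMR or Glue).
events = spark.read.parquet("s3://example-lake/raw/orders/")

# Transform: drop bad rows, derive a date column, and aggregate per customer per day.
daily = (
    events
    .where(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("daily_amount"))
)

# Write back partitioned by date; a downstream COPY or Spectrum external table can expose this to Redshift.
daily.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-lake/curated/orders_daily/")
```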

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Responsibilities
  • Create, implement, and operate the strategy for robust and scalable data pipelines for business intelligence and machine learning.
  • Develop and maintain the core data framework and key infrastructure.
  • Create and support the ETL pipeline to get the data flowing correctly from the existing and new sources to our data warehouse.
  • Data warehouse design and data modeling for efficient and cost-effective reporting.
  • Collaborate with data analysts, data scientists, and other data consumers within the business to manage the data warehouse table structure and optimize it for reporting.
  • Constantly strive to improve the software development process and team productivity.
  • Define and implement data governance processes related to data discovery, lineage, access control, and quality assurance.
  • Perform code reviews and QA data imported by various processes.

Qualifications
  • 3-5 years of experience.
  • At least 2+ years of experience in the data engineering and data infrastructure space on any of the big data technologies: Hive, Spark, PySpark (batch and streaming), Airflow, Redshift, and Delta Lake.
  • Experience in product-based companies or startups.
  • Strong understanding of data warehousing concepts and the data ecosystem.
  • Strong design/architecture experience architecting, developing, and maintaining solutions in AWS.
  • Experience building data pipelines and managing the pipelines after they're deployed.
  • Experience with building data pipelines from business applications using APIs.
  • Previous experience in Databricks is a big plus.
  • Understanding of DevOps is preferable, though not a must.
  • Working knowledge of BI tools like Metabase and Power BI is a plus.
  • Experience architecting systems for data access is a major plus. Show more Show less
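For the orchestration skills this posting mentions (Airflow pipelines feeding Redshift or Delta Lake), here is a minimal, illustrative Airflow DAG; the task logic, IDs, and schedule are hypothetical placeholders rather than anything from the posting.

```python
"""Minimal Airflow DAG sketch; DAG ID, tasks, and schedule are hypothetical."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_source():
    print("pull increments from the business application API")

def load_to_warehouse():
    print("COPY staged files into the warehouse tables")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # one run per day
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_source)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load   # load runs only after extract succeeds
```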

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

About Us

We're building the world’s first AI Super-Assistant purpose-built for enterprises and professionals. Our platform is designed to supercharge productivity, automate workflows, and redefine the way teams work with AI. Our two core products:
  • ChatLLM – Designed for professionals and small teams, offering conversational AI tailored for everyday productivity.
  • Enterprise Platform – A robust, secure, and highly customizable platform for organizations seeking to integrate AI into every facet of their operations.

We’re on a mission to redefine enterprise AI – and we’re looking for engineers ready to build the connective tissue between AI and the systems that power modern business.

Role: Connector Integration Engineer – Databases & Warehouses

As a Connector Integration Engineer focused on data infrastructure, you’ll lead the development and optimization of connectors to enterprise databases and cloud data warehouses. You’ll play a critical role in helping our AI systems securely query, retrieve, and transform large-scale structured data across multiple platforms.

What You’ll Do
  • Build and maintain connectors to data platforms such as BigQuery, Snowflake, Redshift, and other JDBC-compliant databases
  • Work with APIs, SDKs, and data drivers to enable scalable data access
  • Implement secure, token-based access flows using IAM roles and OAuth2
  • Collaborate with AI and product teams to define data extraction and usage models
  • Optimize connectors for query performance, load handling, and schema compatibility
  • Write well-documented, testable, and reusable backend code
  • Monitor and troubleshoot connectivity and performance issues

What We’re Looking For
  • Proficiency in building connectors for Snowflake, BigQuery, and JDBC-based data systems
  • Solid understanding of SQL, API integrations, and cloud data warehouse patterns
  • Experience with IAM, KMS, and secure authentication protocols (OAuth2, JWT)
  • Strong backend coding skills in Python, TypeScript, or similar
  • Ability to analyze schemas, debug query issues, and support high-volume pipelines
  • Familiarity with RESTful services, data transformation, and structured logging
  • Comfortable working independently on a distributed team

Nice to Have
  • Experience with Redshift, Postgres, or Databricks
  • Familiarity with enterprise compliance standards (SOC 2, ISO 27001)
  • Previous work in data engineering, SaaS, or B2B analytics products
  • Background in high-growth tech companies or top-tier universities encouraged

What We Offer
  • Remote-first work environment
  • Opportunity to shape the future of AI in the enterprise
  • Work with a world-class team of AI researchers and product builders
  • Flat team structure with real impact on product and direction
  • $60,000 USD annual salary

Ready to connect enterprise data to cutting-edge AI workflows? Join us – and help power the world’s first AI Super-Assistant. Show more Show less
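As an illustration of the secure, secret-based Redshift access this connector role describes, here is a minimal sketch using the AWS Redshift Data API via boto3; the cluster, database, region, and secret ARN are hypothetical placeholders, and a production connector would add retries, backoff, and pagination.

```python
"""Illustrative Redshift Data API sketch; cluster, database, and secret names are hypothetical."""
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

def run_query(sql: str) -> list:
    """Submit a statement without embedding passwords: credentials come from Secrets Manager."""
    stmt = client.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="analytics",
        SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:example-redshift",
        Sql=sql,
    )
    # Poll until the statement finishes (error handling kept minimal for brevity).
    while True:
        desc = client.describe_statement(Id=stmt["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)
    if desc["Status"] != "FINISHED":
        raise RuntimeError(desc.get("Error", "statement did not finish"))
    return client.get_statement_result(Id=stmt["Id"])["Records"]

if __name__ == "__main__":
    print(run_query("SELECT table_name FROM information_schema.tables LIMIT 5;"))
```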

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description: Spark Scala+AWS+SQL Developer (SA/M)

A Spark Scala+AWS+SQL Developer is responsible for building and maintaining distributed data processing systems using Apache Spark and Scala, leveraging AWS cloud services for scalable and efficient data solutions. The role involves developing ETL/ELT pipelines, optimizing Spark jobs, and crafting complex SQL queries for data transformation and analysis. Collaboration with teams, ensuring data quality, and adhering to best coding practices are essential aspects of the role.

Core skills include:
  • Proficiency in Apache Spark and Scala programming.
  • Expertise in SQL for database management and optimization.
  • Experience with AWS services like S3, EMR, Glue, and Redshift.
  • Knowledge of data warehousing, data lakes, and big data tools.

The position suits those passionate about data engineering and looking to work in dynamic and cloud-based environments!

Key Responsibilities:
  • Data Pipeline Development
  • Cloud-based Solutions
  • Data Processing & Transformation
  • Performance Optimization
  • Collaboration & Communication
  • Data Quality & Security
  • Continuous Improvement

Skills and Knowledge:
  1. Apache Spark: proficiency in creating distributed data processing pipelines; hands-on experience with Spark components like RDDs, DataFrames, Datasets, and Spark Streaming.
  2. Scala Programming: expertise in Scala for developing Spark applications; knowledge of functional programming concepts.
  3. AWS Services: familiarity with key AWS tools like S3, EMR, Glue, Lambda, Redshift, and Athena; ability to design, deploy, and manage cloud-based solutions.
  4. SQL Expertise: ability to write complex SQL queries for data extraction, transformation, and reporting; experience in query optimization and database performance tuning.
  5. Data Engineering: skills in building ETL/ELT pipelines for seamless data flow; understanding of data lakes, data warehousing, and data modeling.
  6. Big Data Ecosystem: knowledge of Hadoop, Kafka, and other big data tools (optional but beneficial).
  7. Version Control and CI/CD: proficiency in Git for version control; experience with continuous integration and deployment pipelines.
  8. Performance Tuning: expertise in optimizing Spark jobs and SQL queries for efficiency.

Soft Skills: strong problem-solving abilities; effective communication and collaboration skills; attention to detail and adherence to coding best practices.

Domain Knowledge: familiarity with data governance and security protocols; understanding of business intelligence and analytics requirements.

Skills Required
Role: Spark Scala+AWS+SQL Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate - B.Tech
Employment Type: Full Time, Permanent
Key Skills: Apache Spark, Scala Programming, SQL Expertise, AWS Services, ETL/ELT Pipelines

Other Information
Job Code: GO/JC/21445/2025
Recruiter Name: SPriya

Show more Show less

Posted 2 weeks ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

Linkedin logo

Job Summary: We are looking for a skilled and motivated Data Engineer to join our growing data team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support analytics, reporting, and machine learning initiatives. You will work closely with data analysts, data scientists, and software engineers to ensure reliable access to high-quality data across the organization. Key Responsibilities: Design, develop, and maintain robust and scalable data pipelines and ETL/ELT processes. Build and optimize data architectures to support data warehousing, batch processing, and real-time data streaming. Collaborate with data scientists, analysts, and other engineers to deliver high-impact data solutions. Ensure data quality, consistency, and security across all systems. Manage and monitor data workflows to ensure high availability and performance. Develop tools and frameworks to automate data ingestion, transformation, and validation. Participate in data modeling and architecture discussions for both transactional and analytical systems. Maintain documentation of data flows, architecture, and related processes. Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or related field. Strong programming skills in Python, Java, or Scala. Proficient in SQL and experience working with relational databases (e.g., PostgreSQL, MySQL). Experience with big data tools and frameworks (e.g., Hadoop, Spark, Kafka). Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and services like S3, Redshift, BigQuery, or Azure Data Lake. Hands-on experience with data pipeline orchestration tools (e.g., Airflow, Luigi). Experience with data warehousing and data modeling best practices. Preferred Qualifications: Experience with CI/CD for data pipelines. Knowledge of containerization and orchestration tools like Docker and Kubernetes. Experience with real-time data processing technologies (e.g., Apache Flink, Kinesis). Familiarity with data governance and security practices. Exposure to machine learning pipelines is a plus. Show more Show less

Posted 2 weeks ago

Apply

5.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Responsibilities / Qualifications
  • Candidates must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred.
  • Ability to understand the existing system architecture and work towards the target architecture.
  • Experience with data profiling activities, discovering data quality challenges, and documenting them.
  • Experience with development and implementation of a large-scale data lake and data analytics platform on the AWS Cloud platform.
  • Develop and unit test data pipeline architecture for data ingestion processes using AWS native services.
  • Experience with development on AWS Cloud using AWS data stores and services such as Redshift, RDS, S3, Glue Data Catalog, Lake Formation, Apache Airflow, Lambda, etc.
  • Experience with development of a data governance framework, including the management of data, operating model, data policies, and standards.
  • Experience with orchestration of workflows in an enterprise environment.
  • Working experience with Agile methodology.
  • Experience working with source code management tools such as AWS CodeCommit or GitHub.
  • Experience working with Jenkins or any CI/CD pipelines using AWS services.
  • Experience working with an onshore/offshore model and collaborating on deliverables.
  • Good communication skills to interact with the onshore team. Show more Show less
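To illustrate the Glue-catalog-to-data-lake ingestion described above, here is a minimal AWS Glue job sketch. It assumes the Glue job runtime (the awsglue library is only available there), and the catalog database, table, and S3 path names are hypothetical.

```python
"""Minimal AWS Glue job sketch; database, table, and path names are hypothetical."""
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog (e.g., by a crawler).
raw = glue_context.create_dynamic_frame.from_catalog(database="example_raw_db", table_name="orders")

# Light transformation: keep and rename only the columns the curated layer needs.
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "double", "order_amount", "double")],
)

# Write curated parquet to the data lake; Lake Formation or Redshift Spectrum can expose it downstream.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-zone/orders/"},
    format="parquet",
)
job.commit()
```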

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. About The Team Come be a part of something big. If you want to be a part of building something big that will drive value throughout the entire global organization, then this is the opportunity for you. You will be working on top priority initiatives that span new and existing technologies - all to deliver outstanding results and experiences for our customers and employees. The Enterprise Data Services organization in Business Technology takes pride in enabling data driven business outcomes to spearhead Workday’s growth through trusted data excellence, innovation and architecture thought leadership. Our organization is responsible for developing and supporting Data Warehousing, Data Ingestion and Integration Services, Master Data Management (MDM), Data Quality Assurance, and the deployment of cutting-edge Advanced Analytics and Machine Learning solutions tailored to enhance multiple business sectors such as Sales, Marketing, Services, Support, and Customer Engagement. Our team harnesses the power of top-tier modern cloud platforms and services, including AWS, Databricks, Snowflake, Reltio, Tableau, Snaplogic, and MongoDB, complemented by a suite of AWS-native technologies like Spark, Airflow, Redshift, Sagemaker, and Kafka. These tools are pivotal in our drive to create robust data ecosystems that empower our business operations with precision and scalability. EDS is a global team distributed across the U.S, India and Canada. About The Role Join a pioneering organization at the forefront of technological advancement, dedicated to demonstrating data-driven insights to transform industries and drive innovation. We are actively seeking a skilled Data Platform and Support Engineer who will play a pivotal role in ensuring the smooth functioning of our data infrastructure, enabling self-service analytics, and empowering analytical teams across the organization. As a Data Platform and Support Engineer, you will oversee the management of our enterprise data hub, working alongside a team of dedicated data and software engineers to build and maintain a robust data ecosystem that drives decision-making at scale for internal analytical applications. You will play a key role in ensuring the availability, reliability, and performance of our data infrastructure and systems. You will be responsible for monitoring, maintaining, and optimizing data systems, providing technical support, and implementing proactive measures to enhance data quality and integrity. 
This role requires advanced technical expertise, problem-solving skills, and a strong commitment to delivering high-quality support services. The team is responsible for supporting Data Services, Data Warehouse, Analytics, Data Quality, and Advanced Analytics/ML for multiple business functions including Sales, Marketing, Services, Support, and Customer Experience. We leverage leading modern cloud platforms like AWS, Reltio, Snowflake, Tableau, Snaplogic, and MongoDB, in addition to native AWS technologies like Spark, Airflow, Redshift, SageMaker, and Kafka.

Job Responsibilities:
  • Monitor the health and performance of data systems, including databases, data warehouses, and data lakes.
  • Conduct root cause analysis and implement corrective actions to prevent recurrence of issues.
  • Manage and optimize data infrastructure components such as servers, storage systems, and cloud services.
  • Develop and implement data quality checks, validation rules, and data cleansing procedures.
  • Implement security controls and compliance measures to protect sensitive data and ensure regulatory compliance.
  • Design and implement data backup and recovery strategies to safeguard data against loss or corruption.
  • Optimize the performance of data systems and processes by tuning queries, optimizing storage, and improving ETL pipeline efficiency.
  • Maintain comprehensive documentation, runbooks, and fix guides for data systems and processes.
  • Collaborate with multi-functional teams, including data engineers, data scientists, business analysts, and IT operations.
  • Lead or participate in data-related projects, such as system migrations, upgrades, or expansions.
  • Deliver training and mentorship to junior team members, sharing knowledge and standard methodologies to support their professional development.
  • Participate in rotational shifts, including on-call rotations and coverage during weekends and holidays as required, to provide 24/7 support for data systems, responding to and resolving data-related incidents in a timely manner.
  • Hands-on experience with source version control, continuous integration, and release/organizational change delivery tools.

About You

Basic Qualifications:
  • 6+ years of experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business.
  • BE/Masters in computer science or equivalent is required.

Other Qualifications:
  • Prior experience with CRM systems (e.g., Salesforce) is desirable.
  • Experience building analytical solutions for Sales and Marketing teams.
  • Should have experience working on Snowflake, Fivetran, DBT, and Airflow.
  • Experience with very large-scale data warehouse and data engineering projects.
  • Experience developing low-latency data processing solutions like AWS Kinesis, Kafka, and Spark stream processing.
  • Should be proficient in writing advanced SQL, with expertise in SQL performance tuning.
  • Experience working with AWS data technologies like S3, EMR, Lambda, DynamoDB, Redshift, etc.
  • Solid experience in one or more programming languages for processing of large data sets, such as Python or Scala.
  • Ability to create data models and star schemas for data consumption.
  • Extensive experience in troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.

Our Approach to Flexible Work

With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work.
We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.

Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process! Show more Show less

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description: Spark Scala+AWS+SQL Developer (SA/M)

A Spark Scala+AWS+SQL Developer is responsible for building and maintaining distributed data processing systems using Apache Spark and Scala, leveraging AWS cloud services for scalable and efficient data solutions. The role involves developing ETL/ELT pipelines, optimizing Spark jobs, and crafting complex SQL queries for data transformation and analysis. Collaboration with teams, ensuring data quality, and adhering to best coding practices are essential aspects of the role.

Core skills include:
  • Proficiency in Apache Spark and Scala programming.
  • Expertise in SQL for database management and optimization.
  • Experience with AWS services like S3, EMR, Glue, and Redshift.
  • Knowledge of data warehousing, data lakes, and big data tools.

The position suits those passionate about data engineering and looking to work in dynamic and cloud-based environments!

Key Responsibilities:
  • Data Pipeline Development
  • Cloud-based Solutions
  • Data Processing & Transformation
  • Performance Optimization
  • Collaboration & Communication
  • Data Quality & Security
  • Continuous Improvement

Skills and Knowledge:
  1. Apache Spark: proficiency in creating distributed data processing pipelines; hands-on experience with Spark components like RDDs, DataFrames, Datasets, and Spark Streaming.
  2. Scala Programming: expertise in Scala for developing Spark applications; knowledge of functional programming concepts.
  3. AWS Services: familiarity with key AWS tools like S3, EMR, Glue, Lambda, Redshift, and Athena; ability to design, deploy, and manage cloud-based solutions.
  4. SQL Expertise: ability to write complex SQL queries for data extraction, transformation, and reporting; experience in query optimization and database performance tuning.
  5. Data Engineering: skills in building ETL/ELT pipelines for seamless data flow; understanding of data lakes, data warehousing, and data modeling.
  6. Big Data Ecosystem: knowledge of Hadoop, Kafka, and other big data tools (optional but beneficial).
  7. Version Control and CI/CD: proficiency in Git for version control; experience with continuous integration and deployment pipelines.
  8. Performance Tuning: expertise in optimizing Spark jobs and SQL queries for efficiency.

Soft Skills: strong problem-solving abilities; effective communication and collaboration skills; attention to detail and adherence to coding best practices.

Domain Knowledge: familiarity with data governance and security protocols; understanding of business intelligence and analytics requirements.

Skills Required
Role: Spark Scala+AWS+SQL Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate - B.Tech
Employment Type: Full Time, Permanent
Key Skills: Apache Spark, Scala Programming, SQL Expertise, AWS Services, ETL/ELT Pipelines

Other Information
Job Code: GO/JC/21445/2025
Recruiter Name: SPriya

Show more Show less

Posted 2 weeks ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Amazon Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find a plethora of opportunities across industries throughout the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium; see the illustrative sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
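Several of the questions above (SORTKEY vs DISTKEY, the COPY command, columnar storage) are easiest to discuss with a concrete table in front of you. The sketch below is purely illustrative: the cluster endpoint, credentials, bucket, and IAM role are placeholders, and the same SQL could be run from any Redshift client.

```python
"""Illustrative answers in code: a fact-table DDL with DISTKEY/SORTKEY and a COPY load.
Connection details, bucket, and IAM role are hypothetical placeholders."""
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)   -- co-locates rows sharing a customer_id on the same slice, speeding joins on that key
SORTKEY (sale_date);    -- lets range filters on sale_date skip blocks via zone maps
"""

COPY = """
COPY sales_fact
FROM 's3://example-bucket/sales/2024-01-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
FORMAT AS PARQUET;      -- bulk, parallel load straight from S3
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(COPY)
conn.close()
```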

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies