
2009 Redshift Jobs - Page 42

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

50.0 years

9 - 9 Lacs

Pune

On-site

About Data Axle: Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for 50 years in the US. Data Axle has set up a strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and leveraging proprietary business & consumer databases. Data Axle is headquartered in Dallas, TX, USA.

Roles & Responsibilities: We are looking for a Data Engineer who will design, implement and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.
- Create and support real-time data pipelines built on AWS technologies including Glue, Redshift/Spectrum, Kinesis, EMR and Athena.
- Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
- Work closely with team members to drive real-time model implementations for monitoring and alerting of risk systems.
- Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning.
- Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

Requirements:
- 3-5+ years of industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets.
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical discipline.
- Demonstrated strength in data modeling, ETL development, and data warehousing.
- Experience with big data processing using Spark.
- Knowledge of data management fundamentals and data storage principles.
- Experience using business intelligence reporting tools (Tableau, Business Objects, Cognos, Power BI, etc.).
- Experience working with AWS big data technologies (Redshift, S3, EMR, Spark).
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience working with distributed systems as they pertain to data storage and computing.
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.
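For context on the ad-hoc, SQL-on-S3 access this posting describes, here is a minimal Python sketch that submits an Athena query through boto3 and polls for completion. The database, table, and results bucket are hypothetical placeholders, not details from the listing.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

def run_athena_query(sql, database, output_s3):
    """Start an Athena query, wait for it to finish, and return the raw rows."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)  # simple polling; a real job would back off and time out
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {qid} ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

# Hypothetical usage against a Glue catalog database
rows = run_athena_query(
    "SELECT state, COUNT(*) AS businesses FROM consumer_db.businesses GROUP BY state",
    database="consumer_db",
    output_s3="s3://example-athena-results/adhoc/",
)
print(rows[:5])
```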

Posted 2 weeks ago

Apply

4.0 - 7.0 years

20 - 35 Lacs

Pune

On-site

Note: We are looking for candidates from an AI/ML background who are immediate joiners or can join within 30 days.

Location: Kharadi (Pune) and Bangalore
Experience: 4 to 7 years

We are looking for a Data Scientist with strong experience in automation, data processing, and applied machine learning. This role will focus on building intelligent solutions using Python, R, SQL, and cloud technologies to drive automation, analytics, and sustainability-focused initiatives.

Key Responsibilities:
- Design, build, and maintain data pipelines and architectures for scalable analytics
- Analyze large, complex datasets to extract actionable insights
- Develop and deploy machine learning models and LLMs for predictive and NLP use cases
- Lead automation projects using Python, SQL, Excel macros/VBA, and APIs
- Implement ETL workflows and ensure high data quality and reliability
- Perform data cleaning, preprocessing, and feature engineering
- Collaborate with stakeholders to support ESG and sustainability data initiatives
- Create visualizations and dashboards using Tableau or Power BI

Desired Skills & Qualifications:
- Proficient in Python and R for data analysis, modeling, and automation
- Hands-on experience working with machine learning models, including LLMs
- Strong expertise in SQL and NoSQL for data querying and management
- Advanced knowledge of Excel, including macros and VBA scripting
- Experience working with APIs for data integration and process automation
- Familiarity with cloud platforms (especially AWS, Redshift, SQL Server)
- Experience with data visualization tools like Tableau or Power BI
- Understanding of ESG metrics and experience working with sustainability data (preferred)

Job Types: Full-time, Permanent
Pay: ₹2,000,000.00 - ₹3,500,000.00 per year
Application Question(s): What is your current CTC? What is your expected CTC? What is your notice period?
Experience: Python: 3 years (Required); SQL: 4 years (Required); Tableau: 1 year (Preferred); Power BI: 1 year (Preferred); NoSQL: 4 years (Required)
Work Location: In person
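As a rough illustration of the "data cleaning, preprocessing, and feature engineering" duties above, here is a hedged pandas sketch; the ESG-style file and column names are invented for the example.

```python
import pandas as pd

def prepare_features(path):
    df = pd.read_csv(path, parse_dates=["report_date"])

    # Basic cleaning: drop exact duplicates and normalise a text key
    df = df.drop_duplicates()
    df["site"] = df["site"].str.strip().str.upper()

    # Simple median imputation for missing numeric readings
    for col in ["energy_kwh", "water_m3"]:
        df[col] = df[col].fillna(df[col].median())

    # Feature engineering: calendar feature plus a per-site rolling mean
    df["month"] = df["report_date"].dt.month
    df = df.sort_values(["site", "report_date"])
    df["energy_kwh_rolling_3m"] = df.groupby("site")["energy_kwh"].transform(
        lambda s: s.rolling(3, min_periods=1).mean()
    )
    return df

features = prepare_features("esg_site_metrics.csv")  # hypothetical dataset
print(features.head())
```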

Posted 2 weeks ago

Apply

6.0 years

8 - 10 Lacs

Noida

On-site

Company Description
About Sopra Steria: Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.

Job Description
BI Solutioning & Data Engineering:
- Design, build, and manage end-to-end Business Intelligence solutions, integrating structured and unstructured data from internal and external sources.
- Architect and maintain scalable data pipelines using cloud-native services (e.g., AWS, Azure, GCP).
- Implement ETL/ELT processes to ensure data quality, transformation, and availability for analytics and reporting.
Market Intelligence & Analytics Enablement:
- Support the Market Intelligence team by building dashboards, visualizations, and data models that reflect competitive, market, and customer insights.
- Work with research analysts to convert qualitative insights into measurable datasets.
- Drive the automation of insight delivery, enabling real-time or near real-time updates.
Visualization & Reporting:
- Design interactive dashboards and executive-level visual reports using tools such as Power BI or Tableau.
- Maintain data storytelling standards to deliver clear, compelling narratives aligned with strategic objectives.
Stakeholder Collaboration:
- Act as a key liaison between business users, strategy teams, research analysts, and IT/cloud engineering.
- Translate analytical and research needs into scalable, sustainable BI solutions.
- Educate internal stakeholders on the capabilities of BI platforms and insights delivery pipelines.
Preferred - Cloud Infrastructure & Data Integration:
- Collaborate with cloud engineering teams to deploy BI tools and data lakes in a cloud environment.
- Ensure data warehousing architecture is aligned with market research and analytics needs.
- Optimize data models and storage for scalability, performance, and security.
Total Experience Expected: 6-9 years

Qualifications
Must:
- Bachelor's/Master's degree in Computer Science, Data Science, Business Analytics, or a related technical field.
- 6+ years of experience in Business Intelligence, Data Engineering, or Cloud Data Analytics.
- Proficiency in SQL, Python, or data wrangling languages.
- Deep knowledge of BI tools like Power BI, Tableau, or QlikView.
- Strong data modeling, ETL, and data governance capabilities.
Preferred:
- Solid understanding of cloud platforms (AWS, Azure, GCP), with hands-on experience in cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Exposure to market intelligence, competitive analysis, or strategic analytics is highly desirable.
- Excellent communication, stakeholder management, and visualization/storytelling skills.

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
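One common ELT pattern behind BI pipelines like those described here is staging extracts in S3 and bulk-loading them into Redshift with COPY. The sketch below assumes psycopg2, a Parquet stage, and an attached IAM role; the cluster endpoint, table, and role ARN are hypothetical.

```python
import psycopg2

COPY_SQL = """
    COPY analytics.market_intel_raw
    FROM 's3://example-bi-staging/market_intel/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",  # in practice, fetch from a secrets manager
)
try:
    with conn, conn.cursor() as cur:
        cur.execute(COPY_SQL)                               # bulk load the staged files
        cur.execute("ANALYZE analytics.market_intel_raw;")  # refresh planner statistics
finally:
    conn.close()
```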

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Looking for Senior Data Engineers / Data Architects with 7+ years of experience.
Location: Chennai/Hyderabad
Notice Period: Immediate to 30 days (only)
Mandatory key skills: AWS, Databricks, Python, PySpark, SQL

1. Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
2. Data Integration and Transformation: Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
3. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
4. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
5. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
6. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
7. Migration and Upgrades: Lead migrations from legacy data systems to modern cloud-based platforms, ensuring smooth transitions and enhanced scalability.
8. Cost Optimization: Implement strategies for reducing cloud infrastructure costs, such as optimizing resource usage, setting up lifecycle policies, and automating cost alerts.
9. Data Security and Compliance: Ensure secure access to data by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards.
10. Collaboration and Support: Work closely with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks.
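To make the data-quality and Delta Lake responsibilities above concrete, here is a small PySpark sketch that assumes a Databricks/Delta-enabled Spark runtime; the S3 paths, columns, and validation rules are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

raw = spark.read.json("s3://example-raw-zone/orders/2024/06/")

# Simple data-quality rules: non-null key, positive amount, known currency
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .filter(F.col("amount") > 0)
       .filter(F.col("currency").isin("INR", "USD", "EUR"))
       .withColumn("ingest_date", F.current_date())
)

rejected = raw.count() - clean.count()
print(f"Rejected {rejected} rows failing validation")  # a real pipeline would log and alert

(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("ingest_date")
      .save("s3://example-curated-zone/orders/"))
```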

Posted 2 weeks ago

Apply

4.0 - 6.0 years

8 - 15 Lacs

Jaipur

Remote

Senior Data Engineer

Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.

Role: Senior Data Engineer
Experience: 4-6 years
Location: Udaipur, Jaipur, Kolkata

Job Description: We are looking for a highly skilled and experienced Data Engineer with 4-6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python
- Collaborate with data analysts, data scientists, and product teams to understand data needs
- Optimize queries and data models for performance and reliability
- Integrate data from various sources, including APIs, internal databases, and third-party systems
- Monitor and troubleshoot data pipelines to ensure data quality and integrity
- Document processes, data flows, and system architecture
- Participate in code reviews and contribute to a culture of continuous improvement

Required Skills:
- 4-6 years of experience in data engineering, data architecture, or backend development with a focus on data
- Strong command of SQL for data transformation and performance tuning
- Experience with Python (e.g., pandas, Spark, ADF)
- Solid understanding of ETL/ELT processes and data pipeline orchestration
- Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
- Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
- Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
- Basic programming skills
- Excellent problem-solving skills and a passion for clean, efficient data systems

Preferred Skills:
- Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
- Exposure to enterprise solutions (e.g., Databricks, Synapse)
- Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
- Background in real-time data streaming and event-driven architectures
- Understanding of data governance, security, and compliance best practices
- Prior experience working in an agile development environment

Educational Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.

Visit us: https://kadellabs.com/ | https://in.linkedin.com/company/kadel-labs | https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm

Job Types: Full-time, Permanent
Pay: ₹826,249.60 - ₹1,516,502.66 per year
Benefits: Flexible schedule, Health insurance, Leave encashment, Paid time off, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay, Performance bonus, Quarterly bonus, Yearly bonus
Ability to commute/relocate: Jaipur, Rajasthan: Reliably commute or plan to relocate before starting work (Required)
Experience: Data Engineer: 4 years (Required)
Location: Jaipur, Rajasthan (Required)
Work Location: In person
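A hedged sketch of the incremental ETL work this listing implies: pull only rows changed since the last run from PostgreSQL and land them as Parquet for the warehouse. The connection string, table, and watermark value are hypothetical, and writing directly to S3 assumes s3fs is installed.

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://etl:***@example-host:5432/appdb")

def extract_increment(last_run_ts):
    query = text(
        "SELECT id, customer_id, status, amount, updated_at "
        "FROM orders WHERE updated_at > :since"
    )
    return pd.read_sql(query, engine, params={"since": last_run_ts})

df = extract_increment("2024-06-01T00:00:00")  # a real pipeline would persist this watermark
df.to_parquet("s3://example-landing/orders/increment.parquet", index=False)
print(f"Extracted {len(df)} changed rows")
```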

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Engineer Location: Chennai Experience Level: 3-6 Years Employment Type: Full-time About Us: SuperOps is a SaaS start-up empowering IT service providers and IT teams around the world with technology that is cutting-edge, future-ready, and powered by AI. We are backed by marquee investors like Addition, March Capital, Matrix Partners India, Elevation Capital, and Tanglin Venture Partners. Founded by Arvind Parthiban, a serial entrepreneur, and Jayakumar Karumbasalam, a veteran in the IT space, SuperOps is built on the back of a team of engineers, product architects, designers, and AI experts, who want to reshape the world of IT. Now we have taken on a market that is plagued by legacy solutions and subpar experiences. The potential to do something great is immense. So if you love to grow, be part of a kickass team that inspires you to do more, and make an everlasting mark in the world of IT, SuperOps is the place to be. We also believe that the journey is as important as the destination. We want to build the best products out there and have fun while doing so. So come, and be part of our A-star team of superheroes. We are looking for a talented Senior Front-End Engineer to join our engineering team. As a senior member of our team, you will be responsible for creating responsive, efficient, and engaging user interfaces for our platform. Role Summary: We are seeking a skilled and motivated Data Engineer to join our growing team. In this role, you will be instrumental in designing, building, and maintaining our data infrastructure, ensuring that reliable and timely data is available for analysis across the organization. You will work closely with various teams to integrate data from diverse sources and transform it into actionable insights that drive our business forward. Key Responsibilities: Design, develop, and maintain scalable and robust data pipelines to ingest data from various sources, including CRM systems (e.g., Salesforce), Billing platforms, Product Analytics tools (e.g., Mixpanel, Amplitude), and Marketing platforms (e.g., Google Ads, Hubspot). Build, manage, and optimize our data warehouse to serve as the central repository for all business-critical data. Implement and manage efficient data synchronization processes between source systems and the data warehouse. Oversee the storage and management of raw data, ensuring data integrity and accessibility. Develop and maintain data transformation pipelines (ETL/ELT) to process raw data into clean, structured formats suitable for analytics, reporting, and dashboarding. Ensure seamless synchronization and consistency between raw and processed data layers. Collaborate with data analysts, product managers, and other stakeholders to understand data needs and deliver appropriate data solutions. Monitor data pipeline performance, troubleshoot issues, and implement improvements for efficiency and reliability. Document data processes, architectures, and definitions. Qualifications: Proven experience as a Data Engineer for 5 to 8 years of experience Strong experience in designing, building, and maintaining data pipelines and ETL/ELT processes. Proficiency with data warehousing concepts and technologies (e.g., BigQuery, Redshift, Snowflake, Databricks). Experience integrating data from various APIs and databases (SQL, NoSQL). Solid understanding of data modeling principles. Proficiency in programming languages commonly used in data engineering (e.g., Python, SQL). 
Experience with workflow orchestration tools (e.g., Airflow, Prefect, Dagster). Familiarity with cloud platforms (e.g., AWS, GCP, Azure). Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Bonus Points: Experience working in a SaaS company. Understanding of key SaaS business metrics (e.g., MRR, ARR, Churn, LTV, CAC). Experience with data visualization tools (e.g., Tableau, Looker, Power BI). Familiarity with containerization technologies (e.g., Docker, Kubernetes). Why Join Us? Impact: You'll work on a product that is revolutionising IT service management for MSPs and IT teams worldwide. Growth: SuperOps is growing rapidly, and there are ample opportunities for career progression and leadership roles. Collaboration: Work with talented engineers, designers, and product managers in a supportive and innovative environment Show more Show less
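For the orchestration side of this role, here is a minimal Airflow DAG sketch for a daily CRM sync; the task bodies are stubs and every name is hypothetical rather than taken from the posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_crm():
    # e.g. call the CRM's REST API and write raw JSON to object storage
    print("extracting CRM accounts")

def transform_to_warehouse():
    # e.g. clean the raw JSON and upsert it into warehouse reporting tables
    print("building reporting tables")

with DAG(
    dag_id="crm_daily_sync",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_crm", python_callable=extract_crm)
    transform = PythonOperator(task_id="transform_to_warehouse", python_callable=transform_to_warehouse)
    extract >> transform
```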

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Wissen Technology is Hiring for Python + Data Engineer About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges Role Overview: We are seeking a skilled and innovative Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes. Experience: 5-9 Years Location: Bangalore Key Responsibilities Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis. Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses). Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions. Monitor, troubleshoot, and enhance data workflows for performance and cost optimization. Ensure data quality and consistency by implementing validation and governance practices. Work on data security best practices in compliance with organizational policies and regulations. Automate repetitive data engineering tasks using Python scripts and frameworks. Leverage CI/CD pipelines for deployment of data workflows on AWS. Required Skills: Professional Experience: 5+ years of experience in data engineering or a related field. Programming: Strong proficiency in Python, with experience in libraries like pandas, pyspark, or boto3. AWS Expertise: Hands-on experience with core AWS services for data engineering, such as: -AWS Glue for ETL/ELT. -S3 for storage. -Redshift or Athena for data warehousing and querying. -Lambda for serverless compute. -Kinesis or SNS/SQS for data streaming. -IAM Roles for security. Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases. Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus. DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline. Version Control: Proficient with Git-based workflows. Problem Solving: Excellent analytical and debugging skills. The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, Quality Assurance & Test Automation. Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. 
Wissen Technology provides exceptional value in mission critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted as the Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized world over by employees and employers alike and is considered the ‘Gold Standard’. Wissen Technology has created a Great Place to Work by excelling in all dimensions - High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie. Website: www.wissen.com LinkedIn: https://www.linkedin.com/company/wissen-technology Wissen Leadership: https://www.wissen.com/company/leadership-team/ Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All Wissen Thought Leadership: https://www.wissen.com/articles/ Employee Speak: https://www.ambitionbox.com/overview/wissen-technology-overview https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm Great Place to Work: https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/ https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k About Wissen Interview Process:https://www.wissen.com/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/ Latest in Wissen in CIO Insider: https://www.cioinsiderindia.com/vendor/wissen-technology-setting-new-benchmarks-in-technology-consulting-cid-1064.html Show more Show less
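A hedged sketch of the serverless ingestion pattern this stack suggests: an AWS Lambda handler that reads a newly landed S3 object (standard S3 event format) and forwards its newline-delimited JSON records to a Kinesis stream. Bucket, stream, and field names are hypothetical.

```python
import json
import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")
STREAM_NAME = "example-events-stream"  # hypothetical

def handler(event, context):
    for record in event["Records"]:                       # S3 put notifications
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        for line in body.splitlines():                    # assumes newline-delimited JSON
            payload = json.loads(line)
            kinesis.put_record(
                StreamName=STREAM_NAME,
                Data=json.dumps(payload).encode("utf-8"),
                PartitionKey=str(payload.get("user_id", "unknown")),
            )
    return {"status": "ok"}
```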

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: Spark Scala + AWS + SQL Developer (SA/M)

A Spark Scala + AWS + SQL Developer is responsible for building and maintaining distributed data processing systems using Apache Spark and Scala, leveraging AWS cloud services for scalable and efficient data solutions. The role involves developing ETL/ELT pipelines, optimizing Spark jobs, and crafting complex SQL queries for data transformation and analysis. Collaboration with teams, ensuring data quality, and adhering to best coding practices are essential aspects of the role.

Core skills include:
- Proficiency in Apache Spark and Scala programming.
- Expertise in SQL for database management and optimization.
- Experience with AWS services like S3, EMR, Glue, and Redshift.
- Knowledge of data warehousing, data lakes, and big data tools.
The position suits those passionate about data engineering and looking to work in dynamic, cloud-based environments.

Key Responsibilities: Data Pipeline Development; Cloud-based Solutions; Data Processing & Transformation; Performance Optimization; Collaboration & Communication; Data Quality & Security; Continuous Improvement.

Skills and Knowledge:
1. Apache Spark: Proficiency in creating distributed data processing pipelines. Hands-on experience with Spark components like RDDs, DataFrames, Datasets, and Spark Streaming.
2. Scala Programming: Expertise in Scala for developing Spark applications. Knowledge of functional programming concepts.
3. AWS Services: Familiarity with key AWS tools like S3, EMR, Glue, Lambda, Redshift, and Athena. Ability to design, deploy, and manage cloud-based solutions.
4. SQL Expertise: Ability to write complex SQL queries for data extraction, transformation, and reporting. Experience in query optimization and database performance tuning.
5. Data Engineering: Skills in building ETL/ELT pipelines for seamless data flow. Understanding of data lakes, data warehousing, and data modeling.
6. Big Data Ecosystem: Knowledge of Hadoop, Kafka, and other big data tools (optional but beneficial).
7. Version Control and CI/CD: Proficiency in Git for version control. Experience in continuous integration and deployment pipelines.
8. Performance Tuning: Expertise in optimizing Spark jobs and SQL queries for efficiency.

Soft Skills:
- Strong problem-solving abilities.
- Effective communication and collaboration skills.
- Attention to detail and adherence to coding best practices.

Domain Knowledge:
- Familiarity with data governance and security protocols.
- Understanding of business intelligence and analytics requirements.

Skills Required
Role: Spark Scala + AWS + SQL Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate - B.Tech
Employment Type: Full Time, Permanent
Key Skills: Apache Spark, Scala Programming, SQL Expertise, AWS Services, ETL/ELT Pipelines

Other Information
Job Code: GO/JC/21445/2025
Recruiter Name: SPriya

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities:
- Create, implement and operate the strategy for robust and scalable data pipelines for business intelligence and machine learning.
- Develop and maintain core data frameworks and key infrastructure.
- Create and support the ETL pipeline to get the data flowing correctly from existing and new sources to our data warehouse.
- Data warehouse design and data modeling for efficient and cost-effective reporting.
- Collaborate with data analysts, data scientists, and other data consumers within the business to manage the data warehouse table structure and optimize it for reporting.
- Constantly strive to improve the software development process and team productivity.
- Define and implement Data Governance processes related to data discovery, lineage, access control and quality assurance.
- Perform code reviews and QA data imported by various processes.

Qualifications:
- 3-5 years of experience, with at least 2+ years of experience in the data engineering and data infrastructure space on any of the big data technologies: Hive, Spark, PySpark (batch and streaming), Airflow, Redshift and Delta Lake.
- Experience in product-based companies or startups.
- Strong understanding of data warehousing concepts and the data ecosystem.
- Strong design/architecture experience architecting, developing, and maintaining solutions in AWS.
- Experience building data pipelines and managing the pipelines after they're deployed.
- Experience with building data pipelines from business applications using APIs.
- Previous experience in Databricks is a big plus.
- Understanding of DevOps would be preferable, though not a must.
- Working knowledge of BI tools like Metabase and Power BI is a plus.
- Experience architecting systems for data access is a major plus.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

About Us We're building the world’s first AI Super-Assistant purpose-built for enterprises and professionals. Our platform is designed to supercharge productivity, automate workflows, and redefine the way teams work with AI. Our two core products: ChatLLM – Designed for professionals and small teams, offering conversational AI tailored for everyday productivity. Enterprise Platform – A robust, secure, and highly customizable platform for organizations seeking to integrate AI into every facet of their operations. We’re on a mission to redefine enterprise AI – and we’re looking for engineers ready to build the connective tissue between AI and the systems that power modern business. Role: Connector Integration Engineer – Databases & Warehouses As a Connector Integration Engineer focused on data infrastructure, you’ll lead the development and optimization of connectors to enterprise databases and cloud data warehouses. You’ll play a critical role in helping our AI systems securely query, retrieve, and transform large-scale structured data across multiple platforms. What You’ll Do Build and maintain connectors to data platforms such as: BigQuery Snowflake Redshift and other JDBC-compliant databases Work with APIs, SDKs, and data drivers to enable scalable data access Implement secure, token-based access flows using IAM roles and OAuth2 Collaborate with AI and product teams to define data extraction and usage models Optimize connectors for query performance, load handling, and schema compatibility Write well-documented, testable, and reusable backend code Monitor and troubleshoot connectivity and performance issues What We’re Looking For Proficiency in building connectors for Snowflake, BigQuery, and JDBC-based data systems Solid understanding of SQL, API integrations, and cloud data warehouse patterns Experience with IAM, KMS, and secure authentication protocols (OAuth2, JWT) Strong backend coding skills in Python, TypeScript, or similar Ability to analyze schemas, debug query issues, and support high-volume pipelines Familiarity with RESTful services, data transformation, and structured logging Comfortable working independently on a distributed team Nice to Have Experience with Redshift, Postgres, or Databricks Familiarity with enterprise compliance standards (SOC 2, ISO 27001) Previous work in data engineering, SaaS, or B2B analytics products Background in high-growth tech companies or top-tier universities encouraged What We Offer Remote-first work environment Opportunity to shape the future of AI in the enterprise Work with a world-class team of AI researchers and product builders Flat team structure with real impact on product and direction $60,000 USD annual salary Ready to connect enterprise data to cutting-edge AI workflows? Join us – and help power the world’s first AI Super-Assistant. Show more Show less
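To ground the connector work described above, here is a minimal snowflake-connector-python sketch that opens a session and runs a parameterised query. Account, credentials, and table names are hypothetical; a production connector would use OAuth or key-pair auth and connection pooling rather than a plain password.

```python
import snowflake.connector

def fetch_rows(sql, params):
    conn = snowflake.connector.connect(
        account="example-account",
        user="svc_connector",
        password="***",        # placeholder; token-based auth is the realistic choice
        warehouse="ANALYTICS_WH",
        database="RAW",
        schema="SALES",
    )
    try:
        cur = conn.cursor()
        cur.execute(sql, params)   # bind parameters instead of string formatting
        return cur.fetchmany(1000)
    finally:
        conn.close()

rows = fetch_rows("SELECT order_id, amount FROM orders WHERE order_date >= %s", ("2024-06-01",))
print(len(rows), "rows fetched")
```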

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: Spark Scala + AWS + SQL Developer (SA/M)

A Spark Scala + AWS + SQL Developer is responsible for building and maintaining distributed data processing systems using Apache Spark and Scala, leveraging AWS cloud services for scalable and efficient data solutions. The role involves developing ETL/ELT pipelines, optimizing Spark jobs, and crafting complex SQL queries for data transformation and analysis. Collaboration with teams, ensuring data quality, and adhering to best coding practices are essential aspects of the role.

Core skills include:
- Proficiency in Apache Spark and Scala programming.
- Expertise in SQL for database management and optimization.
- Experience with AWS services like S3, EMR, Glue, and Redshift.
- Knowledge of data warehousing, data lakes, and big data tools.
The position suits those passionate about data engineering and looking to work in dynamic, cloud-based environments.

Key Responsibilities: Data Pipeline Development; Cloud-based Solutions; Data Processing & Transformation; Performance Optimization; Collaboration & Communication; Data Quality & Security; Continuous Improvement.

Skills and Knowledge:
1. Apache Spark: Proficiency in creating distributed data processing pipelines. Hands-on experience with Spark components like RDDs, DataFrames, Datasets, and Spark Streaming.
2. Scala Programming: Expertise in Scala for developing Spark applications. Knowledge of functional programming concepts.
3. AWS Services: Familiarity with key AWS tools like S3, EMR, Glue, Lambda, Redshift, and Athena. Ability to design, deploy, and manage cloud-based solutions.
4. SQL Expertise: Ability to write complex SQL queries for data extraction, transformation, and reporting. Experience in query optimization and database performance tuning.
5. Data Engineering: Skills in building ETL/ELT pipelines for seamless data flow. Understanding of data lakes, data warehousing, and data modeling.
6. Big Data Ecosystem: Knowledge of Hadoop, Kafka, and other big data tools (optional but beneficial).
7. Version Control and CI/CD: Proficiency in Git for version control. Experience in continuous integration and deployment pipelines.
8. Performance Tuning: Expertise in optimizing Spark jobs and SQL queries for efficiency.

Soft Skills:
- Strong problem-solving abilities.
- Effective communication and collaboration skills.
- Attention to detail and adherence to coding best practices.

Domain Knowledge:
- Familiarity with data governance and security protocols.
- Understanding of business intelligence and analytics requirements.

Skills Required
Role: Spark Scala + AWS + SQL Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate - B.Tech
Employment Type: Full Time, Permanent
Key Skills: Apache Spark, Scala Programming, SQL Expertise, AWS Services, ETL/ELT Pipelines

Other Information
Job Code: GO/JC/21445/2025
Recruiter Name: SPriya

Posted 2 weeks ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

Job Summary: We are looking for a skilled and motivated Data Engineer to join our growing data team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support analytics, reporting, and machine learning initiatives. You will work closely with data analysts, data scientists, and software engineers to ensure reliable access to high-quality data across the organization. Key Responsibilities: Design, develop, and maintain robust and scalable data pipelines and ETL/ELT processes. Build and optimize data architectures to support data warehousing, batch processing, and real-time data streaming. Collaborate with data scientists, analysts, and other engineers to deliver high-impact data solutions. Ensure data quality, consistency, and security across all systems. Manage and monitor data workflows to ensure high availability and performance. Develop tools and frameworks to automate data ingestion, transformation, and validation. Participate in data modeling and architecture discussions for both transactional and analytical systems. Maintain documentation of data flows, architecture, and related processes. Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or related field. Strong programming skills in Python, Java, or Scala. Proficient in SQL and experience working with relational databases (e.g., PostgreSQL, MySQL). Experience with big data tools and frameworks (e.g., Hadoop, Spark, Kafka). Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and services like S3, Redshift, BigQuery, or Azure Data Lake. Hands-on experience with data pipeline orchestration tools (e.g., Airflow, Luigi). Experience with data warehousing and data modeling best practices. Preferred Qualifications: Experience with CI/CD for data pipelines. Knowledge of containerization and orchestration tools like Docker and Kubernetes. Experience with real-time data processing technologies (e.g., Apache Flink, Kinesis). Familiarity with data governance and security practices. Exposure to machine learning pipelines is a plus. Show more Show less
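As a small illustration of the real-time ingestion piece mentioned here, the sketch below publishes JSON events to a Kafka topic with kafka-python so a downstream Spark or Flink consumer can process them. The broker address and topic are hypothetical.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["broker-1.example.internal:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_clickstream(event):
    # Key by user so events for one user land in the same partition (preserves ordering)
    producer.send("clickstream-events", key=str(event["user_id"]).encode(), value=event)

publish_clickstream({"user_id": 42, "page": "/pricing", "ts": "2024-06-01T10:15:00Z"})
producer.flush()  # block until buffered messages are delivered
```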

Posted 2 weeks ago

Apply

5.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Responsibilities / Qualifications:
- Candidate must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred.
- Ability to understand the existing system architecture and work towards the target architecture.
- Experience with data profiling activities, discovering data quality challenges and documenting them.
- Experience with development and implementation of large-scale Data Lake and data analytics platforms on the AWS Cloud platform.
- Develop and unit test data pipeline architecture for data ingestion processes using AWS native services.
- Experience with development on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Glue Data Catalog, Lake Formation, Apache Airflow, Lambda, etc.
- Experience with development of a data governance framework, including the management of data, operating model, data policies and standards.
- Experience with orchestration of workflows in an enterprise environment.
- Working experience with Agile methodology.
- Experience working with source code management tools such as AWS CodeCommit or GitHub.
- Experience working with Jenkins or any CI/CD pipelines using AWS services.
- Experience working in an on-shore/off-shore model and collaborating on deliverables.
- Good communication skills to interact with the onshore team.
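A hedged boto3 sketch of the AWS-native orchestration this posting lists: re-crawl the raw zone so new partitions appear in the Glue Data Catalog, then start the Glue ETL job that writes the curated layer. Crawler, job, and argument names are hypothetical.

```python
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Refresh the Data Catalog so newly landed partitions are visible
glue.start_crawler(Name="raw-zone-orders-crawler")
time.sleep(5)  # give the crawler a moment to leave the READY state
while glue.get_crawler(Name="raw-zone-orders-crawler")["Crawler"]["State"] != "READY":
    time.sleep(15)

# Kick off the ETL job that writes curated Parquet for Athena / Redshift Spectrum
run = glue.start_job_run(
    JobName="orders-raw-to-curated",
    Arguments={"--ingest_date": "2024-06-01"},  # hypothetical job argument
)
status = glue.get_job_run(JobName="orders-raw-to-curated", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])
```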

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. About The Team Come be a part of something big. If you want to be a part of building something big that will drive value throughout the entire global organization, then this is the opportunity for you. You will be working on top priority initiatives that span new and existing technologies - all to deliver outstanding results and experiences for our customers and employees. The Enterprise Data Services organization in Business Technology takes pride in enabling data driven business outcomes to spearhead Workday’s growth through trusted data excellence, innovation and architecture thought leadership. Our organization is responsible for developing and supporting Data Warehousing, Data Ingestion and Integration Services, Master Data Management (MDM), Data Quality Assurance, and the deployment of cutting-edge Advanced Analytics and Machine Learning solutions tailored to enhance multiple business sectors such as Sales, Marketing, Services, Support, and Customer Engagement. Our team harnesses the power of top-tier modern cloud platforms and services, including AWS, Databricks, Snowflake, Reltio, Tableau, Snaplogic, and MongoDB, complemented by a suite of AWS-native technologies like Spark, Airflow, Redshift, Sagemaker, and Kafka. These tools are pivotal in our drive to create robust data ecosystems that empower our business operations with precision and scalability. EDS is a global team distributed across the U.S, India and Canada. About The Role Join a pioneering organization at the forefront of technological advancement, dedicated to demonstrating data-driven insights to transform industries and drive innovation. We are actively seeking a skilled Data Platform and Support Engineer who will play a pivotal role in ensuring the smooth functioning of our data infrastructure, enabling self-service analytics, and empowering analytical teams across the organization. As a Data Platform and Support Engineer, you will oversee the management of our enterprise data hub, working alongside a team of dedicated data and software engineers to build and maintain a robust data ecosystem that drives decision-making at scale for internal analytical applications. You will play a key role in ensuring the availability, reliability, and performance of our data infrastructure and systems. You will be responsible for monitoring, maintaining, and optimizing data systems, providing technical support, and implementing proactive measures to enhance data quality and integrity. 
This role requires advanced technical expertise, problem-solving skills, and a strong commitment to delivering high-quality support services. The team is responsible for supporting Data Services, Data Warehouse, Analytics, Data Quality and Advanced Analytics/ML for multiple business functions including Sales, Marketing, Services, Support and Customer Experience. We demonstrate leading modern cloud platforms like AWS, Reltio, Snowflake,Tableau, Snaplogic, MongoDB in addition to the native AWS technologies like Spark, Airflow, Redshift, Sagemaker and Kafka. Job Responsibilities : Monitor the health and performance of data systems, including databases, data warehouses, and data lakes. Conduct root cause analysis and implement corrective actions to prevent recurrence of issues. Manage and optimize data infrastructure components such as servers, storage systems, and cloud services. Develop and implement data quality checks, validation rules, and data cleansing procedures. Implement security controls and compliance measures to protect sensitive data and ensure regulatory compliance. Design and implement data backup and recovery strategies to safeguard data against loss or corruption. Optimize the performance of data systems and processes by tuning queries, optimizing storage, and improving ETL pipeline efficiency. Maintain comprehensive documentation, runbooks, and fix guides for data systems and processes. Collaborate with multi-functional teams, including data engineers, data scientists, business analysts, and IT operations. Lead or participate in data-related projects, such as system migrations, upgrades, or expansions. Deliver training and mentorship to junior team members, sharing knowledge and standard methodologies to support their professional development. Participate in rotational shifts, including on-call rotations and coverage during weekends and holidays as required, to provide 24/7 support for data systems, responding to and resolving data-related incidents in a timely manner Hands-on experience with source version control, continuous integration and experience with release/organizational change delivery tools. About You Basic Qualifications: 6+ years of experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business. BE/Masters in computer science or equivalent is required Other Qualifications: Prior experience with CRM systems (e.g. Salesforce) is desirable Experience building analytical solutions to Sales and Marketing teams. Should have experience working on Snowflake ,Fivetran DBT and Airflow Experience with very large-scale data warehouse and data engineering projects. Experience developing low latency data processing solutions like AWS Kinesis, Kafka, Spark Stream processing. Should be proficient in writing advanced SQLs, Expertise in performance tuning of SQLs Experience working with AWS data technologies like S3, EMR, Lambda, DynamoDB, Redshift etc. Solid experience in one or more programming languages for processing of large data sets, such as Python, Scala. Ability to create data models, STAR schemas for data consuming. Extensive experience in troubleshooting data issues, analyzing end to end data pipelines and working with users in resolving issues Our Approach to Flexible Work With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. 
We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
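For the monitoring and alerting side of this support role, here is a small hedged sketch that publishes a table-freshness number as a CloudWatch custom metric so an alarm can page on-call. The namespace, dimension, and value source are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

def publish_freshness_metric(hours_since_last_load):
    cloudwatch.put_metric_data(
        Namespace="EnterpriseData/Pipelines",
        MetricData=[{
            "MetricName": "HoursSinceLastLoad",
            "Dimensions": [{"Name": "Pipeline", "Value": "sales_daily_load"}],
            "Value": hours_since_last_load,
            "Unit": "None",
        }],
    )

# In practice this value would come from a MAX(loaded_at) query on the target table
publish_freshness_metric(5.5)
```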

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: Spark Scala + AWS + SQL Developer (SA/M)

A Spark Scala + AWS + SQL Developer is responsible for building and maintaining distributed data processing systems using Apache Spark and Scala, leveraging AWS cloud services for scalable and efficient data solutions. The role involves developing ETL/ELT pipelines, optimizing Spark jobs, and crafting complex SQL queries for data transformation and analysis. Collaboration with teams, ensuring data quality, and adhering to best coding practices are essential aspects of the role.

Core skills include:
- Proficiency in Apache Spark and Scala programming.
- Expertise in SQL for database management and optimization.
- Experience with AWS services like S3, EMR, Glue, and Redshift.
- Knowledge of data warehousing, data lakes, and big data tools.
The position suits those passionate about data engineering and looking to work in dynamic, cloud-based environments.

Key Responsibilities: Data Pipeline Development; Cloud-based Solutions; Data Processing & Transformation; Performance Optimization; Collaboration & Communication; Data Quality & Security; Continuous Improvement.

Skills and Knowledge:
1. Apache Spark: Proficiency in creating distributed data processing pipelines. Hands-on experience with Spark components like RDDs, DataFrames, Datasets, and Spark Streaming.
2. Scala Programming: Expertise in Scala for developing Spark applications. Knowledge of functional programming concepts.
3. AWS Services: Familiarity with key AWS tools like S3, EMR, Glue, Lambda, Redshift, and Athena. Ability to design, deploy, and manage cloud-based solutions.
4. SQL Expertise: Ability to write complex SQL queries for data extraction, transformation, and reporting. Experience in query optimization and database performance tuning.
5. Data Engineering: Skills in building ETL/ELT pipelines for seamless data flow. Understanding of data lakes, data warehousing, and data modeling.
6. Big Data Ecosystem: Knowledge of Hadoop, Kafka, and other big data tools (optional but beneficial).
7. Version Control and CI/CD: Proficiency in Git for version control. Experience in continuous integration and deployment pipelines.
8. Performance Tuning: Expertise in optimizing Spark jobs and SQL queries for efficiency.

Soft Skills:
- Strong problem-solving abilities.
- Effective communication and collaboration skills.
- Attention to detail and adherence to coding best practices.

Domain Knowledge:
- Familiarity with data governance and security protocols.
- Understanding of business intelligence and analytics requirements.

Skills Required
Role: Spark Scala + AWS + SQL Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate - B.Tech
Employment Type: Full Time, Permanent
Key Skills: Apache Spark, Scala Programming, SQL Expertise, AWS Services, ETL/ELT Pipelines

Other Information
Job Code: GO/JC/21445/2025
Recruiter Name: SPriya

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: We are looking for a Senior Data Engineer with strong hands-on experience in PySpark, AWS Cloud Services, and SQL. The ideal candidate should have a passion for working with large-scale data pipelines and modern cloud data architectures, and possess excellent problem-solving skills.

Key Responsibilities:
- Design, develop, and optimize big data processing pipelines using PySpark.
- Build and maintain scalable data solutions on AWS (e.g., S3, Glue, Lambda, EMR, Redshift).
- Write efficient, complex SQL queries for data extraction, transformation, and reporting.
- Collaborate with data scientists, business analysts, and application teams to ensure seamless data flow.
- Implement best practices in data quality, security, and governance.
- Troubleshoot and resolve performance issues in Spark jobs and SQL queries.
- Document system architecture, data workflows, and operational procedures.
- Stay up to date with emerging technologies in data engineering and cloud.

Technical Skills Required:
- Strong proficiency in PySpark (at least 3 years of hands-on development experience)
- Solid experience working with AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena)
- Advanced skills in SQL - writing, optimizing, and troubleshooting queries
- Experience with version control tools like Git
- Knowledge of data modeling and schema design for structured and semi-structured data
- Familiarity with CI/CD pipelines and automation tools is a plus

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8-12 years of total experience in data engineering or a related field.
- Minimum of 3 years of relevant experience in PySpark.

Key Attributes:
- Strong analytical and problem-solving skills
- Ability to work independently and collaboratively across teams
- Excellent communication (written & verbal) and interpersonal skills
- Flexible and adaptable in a fast-paced environment

Skills Required
Role: Senior Data Engineer - PySpark + AWS + SQL
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate
Employment Type: Full Time, Permanent
Key Skills: PySpark, AWS, SQL

Other Information
Job Code: GO/JC/21438/2025
Recruiter Name: SPriya
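As a compact example of the PySpark-on-AWS work described, the sketch below joins a large fact dataset with a small dimension using a broadcast hint and writes the result partitioned for efficient downstream SQL. All S3 paths and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_enrichment").getOrCreate()

orders = spark.read.parquet("s3://example-lake/raw/orders/")          # large fact table
customers = spark.read.parquet("s3://example-lake/dims/customers/")   # small dimension

enriched = (
    orders.join(F.broadcast(customers), on="customer_id", how="left")  # avoid shuffling the big side
          .withColumn("order_month", F.date_format("order_ts", "yyyy-MM"))
)

(enriched.write
         .mode("overwrite")
         .partitionBy("order_month")   # lets Athena / Redshift Spectrum prune partitions
         .parquet("s3://example-lake/curated/orders_enriched/"))
```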

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: We are looking for a Senior Data Engineer with strong hands-on experience in PySpark, AWS Cloud Services, and SQL. The ideal candidate should have a passion for working with large-scale data pipelines and modern cloud data architectures, and possess excellent problem-solving skills.

Key Responsibilities:
- Design, develop, and optimize big data processing pipelines using PySpark.
- Build and maintain scalable data solutions on AWS (e.g., S3, Glue, Lambda, EMR, Redshift).
- Write efficient, complex SQL queries for data extraction, transformation, and reporting.
- Collaborate with data scientists, business analysts, and application teams to ensure seamless data flow.
- Implement best practices in data quality, security, and governance.
- Troubleshoot and resolve performance issues in Spark jobs and SQL queries.
- Document system architecture, data workflows, and operational procedures.
- Stay up to date with emerging technologies in data engineering and cloud.

Technical Skills Required:
- Strong proficiency in PySpark (at least 3 years of hands-on development experience)
- Solid experience working with AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena)
- Advanced skills in SQL - writing, optimizing, and troubleshooting queries
- Experience with version control tools like Git
- Knowledge of data modeling and schema design for structured and semi-structured data
- Familiarity with CI/CD pipelines and automation tools is a plus

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8-12 years of total experience in data engineering or a related field.
- Minimum of 3 years of relevant experience in PySpark.

Key Attributes:
- Strong analytical and problem-solving skills
- Ability to work independently and collaboratively across teams
- Excellent communication (written & verbal) and interpersonal skills
- Flexible and adaptable in a fast-paced environment

Skills Required
Role: Senior Data Engineer - PySpark + AWS + SQL
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate
Employment Type: Full Time, Permanent
Key Skills: PySpark, AWS, SQL

Other Information
Job Code: GO/JC/21438/2025
Recruiter Name: SPriya

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary: We are looking for an experienced Backend Engineer with strong expertise in data engineering to join our team. In this role, you will be responsible for designing and developing scalable data delivery solutions within our AWS-based data warehouse ecosystem. Your work will support business intelligence initiatives by powering dashboards and analytics tools that provide key insights to support strategic decision-making — including enhancing market performance and optimizing songwriter partnerships. Key Responsibilities: Design, develop, and maintain end-to-end ETL workflows and data integration pipelines using AWS tools. Collaborate closely with product managers, software developers, and infrastructure teams to deliver high-quality backend data solutions. Develop and refine stored procedures to ensure efficient data retrieval and transformation. Implement API integrations for seamless data exchange between systems. Continuously identify opportunities to automate and improve backend processes. Participate actively in Agile/Scrum teams, contributing to sprint planning, reviews, and retrospectives. Apply industry best practices to ensure clean, reliable, and scalable data operations. Communicate effectively with both technical stakeholders and cross-functional teams. Rapidly learn and implement new tools and technologies to meet evolving business needs. Required Skills & Experience: 8–10 years of experience in backend or data engineering roles. Strong background in data architecture, including modeling, ingestion, and mining. Hands-on experience with AWS services, including: S3, Glue, Data Pipeline, DMS, RDS, Redshift, Lambda. Proficient in scripting and development using Python, Node.js, and SQL. Solid experience in data warehousing and big data environments. Familiarity with SQL Server and other relational database systems. Proven ability to work effectively in an Agile/Scrum environment. Strong problem-solving skills with a focus on delivering practical, scalable solutions. Nice to Have: AWS certifications (e.g., AWS Certified Data Engineer or Solutions Architect). Exposure to CI/CD practices and DevOps tools. Understanding of data visualization platforms such as Tableau or Power BI. Show more Show less
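Tying together two items in this posting (stored procedures and AWS-native data delivery), here is a hedged sketch that calls a Redshift stored procedure through the Redshift Data API, so no persistent connection is needed. The cluster, database, and procedure names are hypothetical.

```python
import time
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

stmt = rsd.execute_statement(
    ClusterIdentifier="example-dw-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="CALL analytics.refresh_dashboard_tables();",  # hypothetical stored procedure
)

while True:
    desc = rsd.describe_statement(Id=stmt["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)

if desc["Status"] != "FINISHED":
    raise RuntimeError(f"Refresh failed: {desc.get('Error')}")
print("Dashboard tables refreshed")
```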

Posted 2 weeks ago

Apply

4.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role description
Strong grip on both Python and JavaScript.
Hands-on with FastAPI, Apache Spark, React JS, Node JS, and server-side JS.
4-5 years developing software using a wide range of Amazon Web Services and cloud technologies.
Professional hands-on development experience and expertise in JavaScript frameworks like Dojo, jQuery, Ember, React, Angular, or equivalent.
Hands-on with the JavaScript object-based model and programming, ECMAScript 6, TypeScript 4 & NPM.
Expertise with Python and Python frameworks like FastAPI, Django, Flask, Celery, SQLAlchemy.
Hands-on with S3, Lambda, AWS Glue, Fargate, SQS, SNS, EventBridge.
Hands-on with containers, Docker, Kubernetes, ECS, EKS, EC2.
Hands-on with PostgreSQL, AWS Redshift, Oracle, NoSQL, MongoDB.
Build automation with GitHub Actions, Jenkins, and shell scripts.
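
Given the FastAPI-plus-AWS stack this role describes, a minimal sketch of the style of service it implies: a FastAPI endpoint backed by an S3 read via boto3. The bucket, key layout, and route are hypothetical.

```python
# Minimal FastAPI + boto3 sketch; bucket, key prefix, and route are hypothetical.
import json

import boto3
from fastapi import FastAPI, HTTPException

app = FastAPI(title="example-orders-service")
s3 = boto3.client("s3")

@app.get("/orders/{order_id}")
def get_order(order_id: str):
    """Fetch a single order document stored as JSON in S3."""
    try:
        obj = s3.get_object(Bucket="example-orders-bucket", Key=f"orders/{order_id}.json")
    except s3.exceptions.NoSuchKey:
        raise HTTPException(status_code=404, detail="order not found")
    return json.loads(obj["Body"].read())
```

A service like this would typically be run locally with uvicorn and containerized for ECS/EKS, which matches the container tooling listed above.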

Posted 2 weeks ago

Apply

6.0 - 11.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary We are looking for a Senior Analytics Engineer to drive data excellence and innovation in our organization. As a thought leader in data engineering and analytics principles , you will be responsible for designing, building, and optimizing our data infrastructure while ensuring cost efficiency, security, and scalability . You will play a crucial role in managing Databricks and AWS usage , ensuring budget adherence, and taking proactive measures to optimize costs. This role also requires expertise in ETL processes, large-scale data processing, analytics, and data-driven decision-making , along with strong analytical and leadership skills. Responsibilities Act as a thought leader in data engineering and analytics, driving best practices and standards. Oversee cost management of Databricks and AWS, ensuring resource usage stays within allocated budgets and taking corrective actions when necessary. Design, implement, and optimize ETL pipelines for incremental data loading, ensuring seamless data ingestion, transformation, and performance tuning. Lead migration activities, ensuring smooth transitions while maintaining data integrity and availability. Handle massive data loads efficiently, optimizing storage, compute usage, and query performance. Adhere to Git principles for version control, ensuring best practices for collaboration and deployment. Implement and manage DSR (Airflow) workflows to automate and schedule data pipelines efficiently. Ensure data security and compliance, especially when handling PII data, aligning with regulations like GDPR and HIPAA. Optimize query performance and data storage strategies to improve cost efficiency and speed of analytics. Collaborate with data analysts and business stakeholders to enhance analytics capabilities, enabling data-driven decision-making. Develop and maintain dashboards, reports, and analytical models to provide actionable insights for business and engineering teams. Required Skills & Qualifications Four-year or Graduate Degree in Computer Science, Information Systems, or any other related discipline or commensurate work experience or demonstrated competence. 6-11 years of experience in Data Engineering, Analytics, Big Data, or related domains. Strong expertise in Databricks, AWS (S3, EC2, Lambda, RDS, Redshift, Glue, etc.), and cost optimization strategies. Hands-on experience with ETL pipelines, incremental data loads, and large-scale data processing. Proven experience in analyzing large datasets, deriving insights, and optimizing data workflows. Strong knowledge of SQL, Python, PySpark, and other data engineering and analytics tools. Strong problem-solving, analytical, and leadership skills. Experience with BI tools like Tableau, Looker, or Power BI for data visualization and reporting. Preferred Certifications Certified Software Systems Engineer (CSSE) Certified Systems Engineering Professional (CSEP) Cross-Org Skills Effective Communication Results Orientation Learning Agility Digital Fluency Customer Centricity Impact & Scope Impacts function and leads and/or provides expertise to functional project teams and may participate in cross-functional initiatives. Complexity Works on complex problems where analysis of situations or data requires an in-depth evaluation of multiple factors. Disclaimer This job description describes the general nature and level of work performed in this role. It is not intended to be an exhaustive list of all duties, skills, responsibilities, knowledge, etc. 
These may be subject to change and additional functions may be assigned as needed by management.
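
To make the Airflow and incremental-load responsibilities above concrete, here is a minimal sketch of an Airflow 2.x DAG that processes one data interval per day. The DAG id, table names, and watermark logic are hypothetical, and a real pipeline would usually hand the window to Databricks or Spark rather than plain Python.

```python
# Minimal Airflow 2.x DAG sketch for a daily incremental load.
# DAG id, table names, and the load function body are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_increment(data_interval_start=None, data_interval_end=None, **_):
    # In a real pipeline this window would be pushed to Spark/Databricks, e.g.
    # SELECT * FROM source.events WHERE updated_at >= start AND updated_at < end
    print(f"Loading rows updated between {data_interval_start} and {data_interval_end}")

with DAG(
    dag_id="daily_incremental_events_load",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # assumes a recent Airflow 2.x release
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_increment",
        python_callable=load_increment,
    )
```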

Posted 2 weeks ago

Apply

8.0 - 18.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from TCS!!
TCS is Hiring for Data Architect
Interview Mode: Virtual
Required Experience: 8-18 years
Work location: PAN INDIA
Data Architect
Technical Architect with experience in designing data platforms; experience in one of the major platforms such as Snowflake, Databricks, Azure ML, AWS data platforms, etc.
Hands-on experience in ADF, HDInsight, Azure SQL, PySpark, Python, MS Fabric, data mesh
Good to have: Spark SQL, Spark Streaming, Kafka
Hands-on experience in Databricks on AWS, Apache Spark, AWS S3 (Data Lake), AWS Glue, AWS Redshift / Athena
Good to have: AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch
If interested, kindly send your updated CV and the below-mentioned details through e-mail: srishti.g2@tcs.com
Name:
E-mail ID:
Contact Number:
Highest qualification:
Preferred Location:
Highest qualification university:
Current organization:
Total years of experience:
Relevant years of experience:
Any gap: mention no. of months/years (career/education):
If any, then reason for gap:
Is it rebegin:
Previous organization name:
Current CTC:
Expected CTC:
Notice Period:
Have you worked with TCS before (Permanent / Contract):

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

Remote

Position: Senior Database Administrator Position Overview As a Senior Database Administrator (DBA) at Intelex Technologies, you will play a critical role in managing and optimizing our MS SQL Server, Oracle, and PostgreSQL database environments. You will be responsible for the design, implementation, performance tuning, high availability, and security of our database infrastructure across cloud and on-premises deployments. Working within the DevOps & DataOps team , you will collaborate with developers, cloud engineers, and SREs to ensure seamless database operations supporting our mission-critical applications. Responsibilities And Deliverables Database Administration & Architecture Design, implement, and optimize databases across MS SQL Server, Oracle, and PostgreSQL environments. Participate in architecture/design reviews, ensuring database structures align with application needs and performance goals. Define and maintain best practices for schema design, indexing strategies, and query optimization. Performance Tuning & Scalability Conduct proactive query tuning, execution plan analysis, and indexing strategies to optimize database performance. Monitor, troubleshoot, and resolve performance bottlenecks across MS SQL Server, Oracle, and PostgreSQL. Implement partitioning, replication, and caching to improve data access and efficiency. High Availability, Replication & Disaster Recovery Design and implement HA/DR solutions for all supported databases, including MS Clustering, Oracle Data Guard, PostgreSQL Streaming Replication, and Always On Availability Groups. Perform capacity planning and ensure proper backup and recovery strategies are in place. Automate and test failover and recovery processes to minimize downtime. Security & Compliance Implement role-based access control (RBAC), encryption, auditing, and compliance policies across all database environments. Ensure adherence to SOC 2, ISO 27001, GDPR, and HIPAA security standards. Collaborate with security teams to identify and mitigate vulnerabilities. DevOps, CI/CD, & Automation Integrate database changes into CI/CD pipelines, ensuring automated schema migrations and rollbacks. Use Terraform or other IaC tools for database provisioning and configuration management. Automate routine maintenance tasks, monitoring, and alerting using New Relic and PagerDuty or similar. Cloud & Data Technologies Manage cloud-based database solutions such as Azure SQL, Amazon RDS, Aurora, Oracle Cloud, and PostgreSQL on AWS/Azure. Work with NoSQL solutions like MongoDB when needed. Support data warehousing and analytics solutions (e.g., Snowflake, Redshift, SSAS). Incident Response & On-Call Support Provide on-call support for database-related production incidents on a rotational basis. Conduct root cause analysis and implement long-term fixes for database-related issues. Organizational Alignment This is a highly collaborative role requiring close interactions with: DevOps & SRE teams to improve database scalability and monitoring. Developers to ensure efficient database designs and optimize queries. Cloud & Security teams to maintain compliance and security best practices. Qualifications & Skills Required 8+ years of experience managing MS SQL Server, Oracle, and PostgreSQL in enterprise environments. Expertise in database performance tuning, query optimization, and execution plan analysis. Strong experience with replication, clustering, and high-availability configurations. 
Hands-on experience with cloud databases in AWS or Azure (RDS, Azure SQL, Oracle Cloud, etc.). Solid experience with backup strategies, disaster recovery planning, and failover testing. Proficiency in T-SQL, PL/SQL, and PostgreSQL SQL scripting. Experience automating database tasks using PowerShell, Python, or Bash. Preferred Experience with containerized database deployments like Docker, or K8s. Knowledge of Kafka, AMQP, or event-driven architectures for handling high-volume transactions. Familiarity with Oracle Data Guard, GoldenGate, PostgreSQL Logical Replication, and Always On Availability Groups. Experience working in DevOps/SRE environments with CI/CD for database deployments. Exposure to big data technologies and analytical platforms. Certifications such as Oracle DBA Certified Professional, Microsoft Certified: Azure Database Administrator Associate, or AWS Certified Database – Specialty. Education & Other Requirements Bachelor's or Master's degree in Computer Science, Data Engineering, or equivalent experience. This role requires a satisfactory Criminal Background Check and Public Safety Verification. Why Join Intelex Technologies? Work with cutting-edge database technologies in a fast-paced, DevOps-driven environment. Make an impact by supporting critical EHS applications that improve workplace safety. Flexible remote work options and opportunities for professional growth. Collaborate with top-tier cloud, DevOps, and security experts to drive innovation. Fortive Corporation Overview Fortive’s essential technology makes the world stronger, safer, and smarter. We accelerate transformation across a broad range of applications including environmental, health and safety compliance, industrial condition monitoring, next-generation product design, and healthcare safety solutions. We are a global industrial technology innovator with a startup spirit. Our forward-looking companies lead the way in software-powered workflow solutions, data-driven intelligence, AI-powered automation, and other disruptive technologies. We’re a force for progress, working alongside our customers and partners to solve challenges on a global scale, from workplace safety in the most demanding conditions to groundbreaking sustainability solutions. We are a diverse team 17,000 strong, united by a dynamic, inclusive culture and energized by limitless learning and growth. We use the proven Fortive Business System (FBS) to accelerate our positive impact. At Fortive, we believe in you. We believe in your potential—your ability to learn, grow, and make a difference. At Fortive, we believe in us. We believe in the power of people working together to solve problems no one could solve alone. At Fortive, we believe in growth. We’re honest about what’s working and what isn’t, and we never stop improving and innovating. Fortive: For you, for us, for growth. About Intelex Since 1992, Intelex Technologies, ULC. is a global leader in the development and support of software solutions for Environment, Health, Safety and Quality (EHSQ) programs. Our scalable, web-based software provides clients with unprecedented flexibility in managing, tracking and reporting on essential corporate information. Intelex software easily integrates with common ERP systems like SAP and PeopleSoft creating a seamless solution for enterprise-wide information management. 
Intelex’s friendly, knowledgeable staff ensures our almost 1400 clients and over 3.5 million users from companies across the globe get the most out of our groundbreaking, user-friendly software solutions. Visit www.intelex.com to learn more. We Are an Equal Opportunity Employer. Fortive Corporation and all Fortive Companies are proud to be equal opportunity employers. We value and encourage diversity and solicit applications from all qualified applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity or expression, or other characteristics protected by law. Fortive and all Fortive Companies are also committed to providing reasonable accommodations for applicants with disabilities. Individuals who need a reasonable accommodation because of a disability for any part of the employment application process, please contact us at applyassistance@fortive.com.
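
For the performance-tuning side of a DBA role like this one, a minimal sketch of pulling an execution plan from PostgreSQL with psycopg2; the connection details, table, and query are hypothetical, and SQL Server or Oracle would use their own plan tooling instead.

```python
# Sketch: inspecting a PostgreSQL execution plan with psycopg2.
# Connection parameters and the query under review are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-postgres.internal",
    dbname="appdb",
    user="dba_readonly",
    password="change-me",
)

query = """
    SELECT customer_id, COUNT(*) AS orders
    FROM orders
    WHERE order_date >= %s
    GROUP BY customer_id
"""

with conn, conn.cursor() as cur:
    # EXPLAIN (ANALYZE, BUFFERS) actually runs the query and reports timing
    # and I/O per plan node, which is the usual starting point for tuning.
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + query, ("2025-01-01",))
    for (line,) in cur.fetchall():
        print(line)

conn.close()
```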

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

India

Remote

Role - Python Data Analyst
Location - Ahmedabad, Noida, Pune, Bangalore, Hyderabad
Type - Permanent
Work Mode - Hybrid (2 days office, 3 days remote)
Job Description: We are looking for a highly motivated Data Analyst with strong expertise in AWS cloud services and Python to join our analytics team. The ideal candidate will be responsible for extracting, transforming, and analyzing data to generate actionable business insights. You will work closely with data engineers, business stakeholders, and product teams to support data-driven decision-making.
Responsibilities
Analyze and interpret complex datasets to identify trends, patterns, and insights.
Design and implement data pipelines and workflows using AWS services such as S3, Glue, Lambda, Athena, Redshift, and CloudWatch.
Write efficient and reusable Python scripts for data wrangling, automation, and analytics.
Collaborate with business stakeholders to gather requirements and develop analytical solutions.
Ensure data accuracy, consistency, and quality through regular validation and monitoring.
Document processes, analysis findings, and data workflows for transparency and future reference.
Requirements
Bachelor’s degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
4+ years of hands-on experience in data analysis or a related field.
Strong proficiency in Python for data processing, analysis, and automation.
Solid experience working with AWS cloud services, especially S3, Glue, Lambda, Redshift, and Athena.
Proficiency in writing SQL queries for large datasets.
Strong understanding of data structures, ETL pipelines, and data governance.
Good communication and problem-solving skills.
Preferred Skills:
Experience with Pandas, NumPy, Boto3, PySpark, or other Python libraries for data analysis.
Familiarity with version control systems (Git) and CI/CD pipelines.
Exposure to machine learning concepts or data science is a plus.
Knowledge of data visualization tools like Tableau, Amazon QuickSight, or Power BI.
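
As a small illustration of the AWS-plus-Python analysis work described in this listing, a sketch that runs an Athena query with boto3 and loads the result into pandas. The database, table, output bucket, and column names are hypothetical.

```python
# Sketch: run an Athena query and load the result into pandas.
# Database, table, output bucket, and columns are hypothetical.
import time

import boto3
import pandas as pd

athena = boto3.client("athena", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

start = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS revenue FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = start["QueryExecutionId"]

# Poll until the query finishes (simplified; production code would add timeouts).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    # Athena writes the result set as a CSV named after the query id.
    obj = s3.get_object(Bucket="example-athena-results", Key=f"{query_id}.csv")
    df = pd.read_csv(obj["Body"])
    print(df.head())
else:
    raise RuntimeError(f"Athena query ended in state {state}")
```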

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Data Analyst Trainee
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics
Job Summary: We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization.
Key Responsibilities:
Collect, clean, and analyze large datasets from various sources
Perform exploratory data analysis (EDA) and generate actionable insights
Build interactive dashboards and reports using Excel, Power BI, or Tableau
Write and optimize SQL queries for data extraction and manipulation
Collaborate with cross-functional teams to understand data needs
Document analytical methodologies, insights, and recommendations
Qualifications:
Bachelor’s degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field
Proficiency in Excel and SQL
Working knowledge of Python (Pandas, NumPy, Matplotlib) or R
Understanding of basic statistics and analytical methods
Strong attention to detail and problem-solving ability
Ability to work independently and communicate effectively in a remote setting
Preferred Skills (Nice to Have):
Experience with BI tools like Power BI, Tableau, or Google Data Studio
Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift)
Knowledge of data storytelling and KPI measurement
Previous academic or personal projects in analytics
What We Offer:
Monthly stipend of ₹25,000
Fully remote internship
Mentorship from experienced data analysts and domain experts
Hands-on experience with real business data and live projects
Certificate of Completion
Opportunity for a full-time role based on performance
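
For a trainee-level sense of the exploratory data analysis tasks listed above, a minimal pandas sketch; the CSV file and column names are hypothetical.

```python
# Minimal exploratory data analysis sketch; file path and columns are hypothetical.
import pandas as pd

df = pd.read_csv("sales_2025.csv", parse_dates=["order_date"])

# Basic cleaning: drop exact duplicates and rows missing the key fields.
df = df.drop_duplicates().dropna(subset=["order_id", "amount"])

# Quick structure and summary statistics.
df.info()
print(df.describe(include="all"))

# A simple aggregate that could feed a dashboard: monthly revenue by region.
monthly = (df
           .assign(month=df["order_date"].dt.to_period("M"))
           .groupby(["month", "region"], as_index=False)["amount"]
           .sum()
           .rename(columns={"amount": "revenue"}))
print(monthly.head())
```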

Posted 2 weeks ago

Apply

4.0 - 8.0 years

5 - 9 Lacs

Hyderabad, Bengaluru

Work from Office

What's in it for you?
Pay above market standards.
The role is going to be contract-based with project timelines from 2-12 months, or freelancing.
Be a part of an elite community of professionals who can solve complex AI challenges.
Work location could be:
Remote (highly likely)
Onsite on client location
Deccan AI's office: Hyderabad or Bangalore
Responsibilities:
Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
Develop real-time and batch data pipelines to support analytics and machine learning.
Define and enforce data governance strategies to ensure security, integrity, and compliance, along with optimizing data pipelines for high performance, scalability, and cost efficiency in cloud environments.
Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.
Required Skills:
Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).
Nice to Have:
Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
Contributions to open-source data engineering communities.
What are the next steps? Register on our Soul AI website.
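
To ground the real-time streaming requirement above, a minimal sketch of a Spark Structured Streaming job that reads from Kafka and writes to a data lake path. It assumes the Spark-Kafka connector is available on the cluster; the broker address, topic, schema, and S3 paths are hypothetical.

```python
# Sketch: Spark Structured Streaming from Kafka to a data lake path.
# Broker, topic, schema, and output paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker.example.internal:9092")
       .option("subscribe", "clickstream-events")
       .load())

# Kafka delivers the payload as bytes in the `value` column; parse it as JSON.
events = (raw
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-lake/clickstream/")
         .option("checkpointLocation", "s3://example-lake/_checkpoints/clickstream/")
         .outputMode("append")
         .start())

query.awaitTermination()
```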

Posted 2 weeks ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  • Junior Developer
  • Data Engineer
  • Senior Data Engineer
  • Tech Lead
  • Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium; see the sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
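
As a study aid for a few of the questions above (SORTKEY vs. DISTKEY and the COPY command), here is a minimal sketch that creates a distribution- and sort-key-aware table and loads it from S3, using psycopg2 as one common way to connect to Redshift from Python. The cluster endpoint, credentials, table, bucket, and IAM role are hypothetical.

```python
# Sketch: Redshift DDL with DISTKEY/SORTKEY plus a COPY load from S3.
# Endpoint, credentials, table, bucket, and IAM role below are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="change-me",
)

ddl = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTKEY (customer_id)   -- co-locates rows that join on customer_id
SORTKEY (sale_date);    -- speeds up range filters on sale_date
"""

copy_cmd = """
COPY sales_fact
FROM 's3://example-curated-bucket/sales/2025-06-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(ddl)
    cur.execute(copy_cmd)

conn.close()
```

In short, DISTKEY controls how rows are distributed across nodes (which reduces data movement for joins on that column), while SORTKEY controls on-disk ordering so range filters on the sort column can skip blocks; COPY loads data from S3 in parallel across the cluster's slices.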

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies