
3678 Redshift Jobs - Page 21

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Key Responsibilities:
- Machine Learning Solution Development: Design, develop, and deploy ML models, algorithms, and agentic AI systems to address complex business challenges across a range of sectors.
- Cloud & MLOps Management: Lead the implementation of ML solutions on AWS (with heavy use of Amazon SageMaker and related AWS services). Develop and maintain end-to-end CI/CD pipelines for ML projects, using infrastructure-as-code tools such as AWS CloudFormation and Terraform to automate model deployment and system setup.
- Project Leadership: Oversee the ML lifecycle from data preparation to model training, validation, and deployment. Make high-level design decisions on model architecture and data pipelines. Mentor junior engineers and collaborate with data scientists, ML engineers, and software engineering teams to ensure successful delivery of ML projects.
- Client & Stakeholder Collaboration: Collaborate with project managers and stakeholders across a range of sectors to gather requirements and translate business needs into technical solutions. Present findings and ML model results clearly to non-technical audiences, and refine solutions based on their feedback.
- Quality, Security & Compliance: Ensure that ML solutions meet quality and performance standards. Implement monitoring and logging for models in production, and proactively improve model accuracy and efficiency. Given the sensitive nature of the data, enforce data security best practices and compliance with relevant regulations (e.g. data privacy and confidentiality) in all ML workflows.

Required Qualifications & Experience:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field. A strong foundation in statistics and algorithms is expected.
- Experience: 5+ years of hands-on experience in machine learning or data science roles, with a track record of building and deploying ML models into production. Prior experience leading projects or teams is a plus for a lead role.
- Programming & ML Skills: Advanced programming skills in Python (including libraries such as pandas, scikit-learn, and TensorFlow/PyTorch). Solid understanding of ML algorithms, model evaluation techniques, and optimisation. Experience with NLP techniques, generative AI, or financial data modelling is advantageous.
- Cloud & DevOps: Proven experience with AWS cloud services relevant to data science, particularly Amazon SageMaker for model development and deployment. Familiarity with data storage and processing on AWS (S3, AWS Lambda, Athena/Redshift, etc.) is expected. Strong knowledge of DevOps/MLOps practices: candidates should have built or worked with CI/CD pipelines for ML, using tools like Docker and Jenkins, and infrastructure-as-code tools like CloudFormation or Terraform to automate deployments.
- Hybrid Work Skills: Ability to thrive in a hybrid work environment: self-motivated and communicative when working remotely, and effective at in-person collaboration during on-site days. (The role will be based in Chennai with a mix of remote and office work.)
- Soft Skills: Excellent problem-solving and analytical thinking. Strong communication skills to explain complex ML concepts to clients or management. Ability to work under tight deadlines and multitask across projects for different clients. A client-focused mindset is essential, as the role involves understanding and addressing the needs of large clients who place significant trust in the firm.
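For context only (not part of the posting): a minimal sketch of the train/validate/persist loop the ML lifecycle above implies, using scikit-learn. The dataset, metric, and file names are hypothetical; in a SageMaker workflow the saved artifact would be packaged as model.tar.gz and uploaded to S3 for deployment.

```python
# Illustrative sketch only: hypothetical data, model, and paths.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
import joblib

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Validate before any deployment decision.
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.3f}")

# Persist the artifact; a SageMaker flow would tar this and push it to S3.
joblib.dump(model, "model.joblib")
```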

Posted 1 week ago

Apply

4.0 years

15 - 25 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities:
- Partner with product managers, engineers, and business stakeholders to define KPIs and success metrics for Creator Success
- Create comprehensive dashboards and self-service analytics tools using QuickSight, Tableau, or similar BI platforms
- Design, build, and maintain robust ETL/ELT pipelines to process large volumes of streaming and batch data from the Creator Success platform
- Develop and optimize data warehouses, data lakes, and real-time analytics systems using AWS services (Redshift, S3, Kinesis, EMR, Glue)
- Build automated data validation and alerting mechanisms for critical business metrics
- Generate actionable insights from complex datasets to drive product roadmap and business strategy

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative field
- 4+ years of experience in business intelligence/analytics roles, with high proficiency in SQL and Python, and/or Scala
- Strong experience with AWS cloud services (Redshift, S3, EMR, Glue, Lambda, Kinesis)
- Expertise in building and optimizing ETL pipelines and data warehousing solutions
- Proficiency with big data technologies (Spark, Hadoop) and distributed computing frameworks
- Experience with business intelligence tools (QuickSight, Tableau, Looker) and data visualization best practices

Skills: Spark, Python, AWS, Looker, Scala, Hadoop, SQL, Power BI, AWS S3, business intelligence, AWS EMR, AWS Glue, Tableau, AWS Kinesis, QuickSight, AWS Redshift
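For context only: a hedged sketch of one common step in pipelines like these, loading Parquet files from S3 into Redshift with the COPY command via psycopg2. The cluster endpoint, credentials, schema, table, bucket, and IAM role are all hypothetical placeholders.

```python
# Illustrative sketch: batch load from S3 into Redshift (hypothetical names).
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="...",
)
copy_sql = """
    COPY creator_success.daily_metrics
    FROM 's3://example-bucket/creator-metrics/dt=2025-07-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # Redshift ingests the files in parallel
conn.close()
```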

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gujarat, India

On-site

Job Summary: We are looking for a highly skilled and self-motivated Technical Lead (Data, Cloud & AI) to design, develop, and optimize the data pipelines and infrastructure that power AI/ML solutions in the cloud. The ideal candidate will have deep experience in data engineering, strong exposure to cloud platforms, and familiarity with machine learning workflows; FinOps and related experience is preferred. This role will play a critical part in enabling data-driven innovation and scaling intelligent applications across the organization.

Required Skills & Experience:
- 6+ years of experience in data engineering with a strong understanding of data architecture
- Hands-on experience with cloud platforms: AWS (Glue, S3, Redshift) or GCP (BigQuery, Dataflow)
- Strong programming skills in Python, Java, and SQL; knowledge of Spark or Scala is a plus
- Experience with ETL/ELT tools and orchestration frameworks such as Apache Airflow, dbt, or Prefect
- Familiarity with machine learning workflows, the model lifecycle, and MLOps practices
- Proficient in working with both batch and streaming data (Kafka, Kinesis, Pub/Sub)
- Experience with containerization and deployment (Docker; Kubernetes is a plus)
- Good understanding of data security and access control in cloud environments
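For context only: a minimal sketch of the kind of Airflow orchestration named above, assuming Airflow 2.x; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Illustrative sketch: a three-step ELT DAG (hypothetical names; Airflow 2.x API).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and model the data")

def load():
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="example_elt",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```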

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

KEY OBJECTIVE OF THE JOB: Own, execute, and drive CRM campaigns (push notifications, email, SMS, in-app and browser notifications, WhatsApp) to drive channel revenue and visits.

KEY DELIVERABLES:
- Creation, testing, and delivery of campaigns for push and browser notifications, email, SMS, and other owned media channels
- CRM channel planning for push notifications, email, SMS, and browser notifications
- Identifying and driving improvement projects for CTR and campaign efficiency
- Coordination with the creative team to get copy and creatives done on schedule
- Creating automated campaigns by building workflows and data rules, preparing the data on Redshift, and creating schemas and workflows on the campaign management platform
- Building workflows to create and maintain reports on campaign performance

DESIRABLE SKILLS:
Essential attributes: teamwork, communication and interpersonal skills, analytical skills, dependability and a strong work ethic, adaptability and flexibility; data handling in Excel and preferably on Redshift; experience in category/marketing planning and execution; knowledge of CRM tools such as CleverTap, WebEngage, MoEngage, Netcore, AppsFlyer, HubSpot, or similar platforms.
Desired attributes: understanding of email marketing, push notifications, and other CRM channels; basic understanding of segmentation and marketing.
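For context only: a hedged sketch of the kind of Redshift data rule a campaign workflow might reference, building a lapsed-user segment for a re-engagement push. All connection details, schemas, tables, and columns are hypothetical.

```python
# Illustrative sketch: build a campaign audience segment on Redshift
# (hypothetical connection details and table names).
import psycopg2

with psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="crm", user="crm_user", password="...",
) as conn:
    with conn.cursor() as cur:
        cur.execute("CREATE SCHEMA IF NOT EXISTS crm;")
        # Users with no visit in the last 30 days, as a reusable segment table.
        cur.execute("""
            CREATE TABLE crm.push_lapsed_users AS
            SELECT user_id
            FROM analytics.user_activity
            GROUP BY user_id
            HAVING MAX(last_visit_date) < DATEADD(day, -30, CURRENT_DATE);
        """)
```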

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us: Airtel Payments Bank, India's first payments bank, is a completely digital and paperless bank. The bank aims to take basic banking services to the doorstep of every Indian by leveraging Airtel's vast retail network in a quick and efficient manner. At Airtel Payments Bank, we're transforming the way banking operates in the country. Our core business is banking, and we've set out to serve every unbanked and underserved Indian. Our products and technology aim to take basic banking services to the doorstep of every Indian. We are a fun-loving, energetic and fast-growing company that breathes innovation. We encourage our people to push boundaries and evolve from skilled professionals of today to risk-taking entrepreneurs of tomorrow. We hire people from every realm and offer them opportunities that encourage individual and professional growth. We are always looking for people who are thinkers and doers; people with passion, curiosity and conviction; people who are eager to break away from conventional roles and do 'jobs never done before'.

About the Team: We are a team of engineers and problem-solvers building scalable, modern data infrastructure. Our mission is to power intelligent decision-making through clean, reliable, and real-time data pipelines using technologies like Kafka, PySpark, Hadoop, Airflow, and AWS. If you love working with data at scale, building cloud-native solutions, and improving pipeline reliability, you'll thrive here.

Key Responsibilities:
• Design, build, and maintain scalable ETL/ELT pipelines using Python, PySpark, and SQL.
• Develop real-time data ingestion and streaming solutions using Apache Kafka and AWS Kinesis.
• Leverage the Hadoop ecosystem and AWS EMR for distributed data processing.
• Automate and orchestrate workflows using Apache Airflow (deployed via MWAA on AWS).
• Build and expose data services and APIs using Flask, deployed via Nginx.
• Implement centralized logging and monitoring with Filebeat and the ELK/OpenSearch stack.
• Work extensively with AWS services including S3, API Gateway, OpenSearch, Kinesis, and EMR.
• Collaborate with data scientists, analysts, and platform engineers to ensure high-quality, accessible data.

Must-Have Skills:
• 4–7 years of experience in data engineering.
• Proficiency in Python and SQL (experience with Oracle or a similar RDBMS).
• Good experience with Apache Kafka (producer/consumer architecture, stream-processing concepts).
• Hands-on with PySpark, Hadoop, and Hive for big data processing.
• Workflow orchestration using Apache Airflow (and optionally MWAA).
• Building APIs with Flask and serving via Nginx.
• Logging and observability using the ELK Stack and Filebeat.
• Good familiarity with the AWS ecosystem: EMR, Kinesis, S3, OpenSearch, API Gateway, MWAA.

Good-to-Have (but not mandatory):
• Experience with AWS Glue, Athena, and Redshift for serverless data processing and warehousing.
• Familiarity with AWS Flink or other stream-processing frameworks.
• Exposure to AWS DMS (Data Migration Service) for database migrations and replication tasks.
• Knowledge of AWS QuickSight for dashboarding and BI reporting.
• Understanding of data lake architectures and event-driven processing on AWS.

Why Join Us? Airtel Payments Bank is transforming from a digital-first bank into one of the largest fintech companies. There could not be a better time to join us and be a part of this incredible journey than now. We at Airtel Payments Bank don't believe in an all-work-and-no-play philosophy. For us, innovation is a way of life, and we are a happy bunch of people who have built together an ecosystem that drives financial inclusion in the country by serving 300 million financially unbanked, underbanked, and underserved people of India. The defining characteristics of life at Airtel Payments Bank are Responsibility, Agility, Collaboration and Entrepreneurial development; these also reflect in our core values, which we fondly call RACE.
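For context only: a minimal sketch of the Kafka producer/consumer pattern named in the must-have skills, using the kafka-python client. The topic, brokers, group id, and event fields are hypothetical.

```python
# Illustrative sketch: consume JSON events from a Kafka topic (hypothetical names).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments.transactions",
    bootstrap_servers=["broker1:9092", "broker2:9092"],
    group_id="etl-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real pipeline this would land in S3/HDFS for downstream PySpark jobs.
    print(event.get("txn_id"), event.get("amount"))
```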

Posted 1 week ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Role: AWS Data Engineer
Years of Experience: 5+ years
Work Mode: Chennai (WFO)
Interview: Face-to-face on 26 July (Saturday)
Shift Time: supporting IST hours
Preferred Domain: Life Sciences/Pharma (good to have)
Mandatory Key Skillset: AWS, Python, Databricks, PySpark, SQL

Roles and Responsibilities:
1. Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
2. Data Integration and Transformation: Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
3. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
4. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
5. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
6. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
7. Migration and Upgrades: Lead migrations from legacy data systems to modern cloud-based platforms, ensuring smooth transitions and enhanced scalability.
8. Cost Optimization: Implement strategies for reducing cloud infrastructure costs, such as optimizing resource usage, setting up lifecycle policies, and automating cost alerts.
9. Data Security and Compliance: Ensure secure access to data by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards.
10. Collaboration and Support: Work closely with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks.
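For context only: a hedged sketch of the ingest-transform-store flow in responsibilities 1 and 2, as a PySpark job writing to Delta Lake. The S3 paths and columns are hypothetical, and Delta format assumes the Delta Lake libraries are on the cluster (standard on Databricks).

```python
# Illustrative sketch: PySpark batch transform into Delta (hypothetical paths).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_clinical_batch").getOrCreate()

raw = spark.read.json("s3://example-raw/clinical/2025-07-01/")
clean = (
    raw.dropDuplicates(["record_id"])                 # basic dedup rule
       .filter(F.col("record_id").isNotNull())        # basic quality gate
       .withColumn("ingested_at", F.current_timestamp())
)

# Append the curated batch to a Delta table in the lake.
clean.write.format("delta").mode("append").save("s3://example-curated/clinical/")
```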

Posted 1 week ago

Apply

3.0 years

0 Lacs

Thiruvananthapuram Taluk, India

On-site

Position: Data Engineer
Experience: 3+ years
Location: Trivandrum (hybrid)
Salary: Up to 8 LPA

Job Summary: We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities:
● Design and Development:
○ Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes.
○ Implement data models and schemas that support analytical and reporting requirements.
○ Build and maintain robust data APIs for data consumption by various applications and services.
● Data Infrastructure:
○ Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
○ Ensure data security, privacy, and compliance with relevant regulations.
○ Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.
● Collaboration & Optimization:
○ Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
○ Optimize existing data processes for efficiency, cost-effectiveness, and performance.
○ Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.
● Troubleshooting & Support:
○ Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
○ Provide support and expertise to teams consuming data from the data platform.

Required Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
● 3+ years of hands-on experience as a Data Engineer or in a similar role.
● Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
● Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
● Proven experience with ETL/ELT tools and concepts.
● Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
● Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
● Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
● Experience with version control systems (e.g., Git).
● Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications:
● Master's degree in a relevant field.
● Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
● Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
● Experience with data streaming technologies (e.g., Kafka, Kinesis).
● Knowledge of containerization technologies (e.g., Docker, Kubernetes).
● Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
● Understanding of DevOps principles as applied to data pipelines.
● Prior experience in telecom is a plus.
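For context only: a small sketch of the data-quality gating this role describes, applied to a pandas DataFrame before load. Column names, rules, and the sample data are hypothetical.

```python
# Illustrative sketch: simple pre-load data-quality checks (hypothetical rules).
import pandas as pd

def validate(df: pd.DataFrame) -> list:
    errors = []
    if df["order_id"].isna().any():
        errors.append("null order_id values found")
    if df["order_id"].duplicated().any():
        errors.append("duplicate order_id values found")
    if (df["amount"] < 0).any():
        errors.append("negative amounts found")
    return errors

df = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 7.5]})
problems = validate(df)
if problems:
    # Fail the pipeline run so alerting can pick it up.
    raise ValueError("; ".join(problems))
```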

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Job Information:
Date Opened: 07/23/2025
Job Type: Permanent
RSD NO: 10371
Industry: IT Services
Min Experience: 15+
Max Experience: 15+
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600018

Job Description
Job Summary: We are seeking a Data Architect to design and implement scalable, secure, and efficient data solutions that support Convey Health Solutions' business objectives. This role will focus on data modeling, cloud data platforms, ETL processes, and analytics solutions, ensuring compliance with healthcare regulations (HIPAA, CMS guidelines). The ideal candidate will collaborate with data engineers, BI analysts, and business stakeholders to drive data-driven decision-making.

Key Responsibilities:
- Enterprise Data Architecture: Design and maintain the overall data architecture to support Convey Health Solutions' data-driven initiatives.
- Cloud & Data Warehousing: Architect cloud-based data solutions (AWS, Azure, Snowflake, BigQuery) to optimize scalability, security, and performance.
- Data Modeling: Develop logical and physical data models for structured and unstructured data, supporting analytics, reporting, and operational processes.
- ETL & Data Integration: Define strategies for data ingestion, transformation, and integration, leveraging ETL tools like Informatica, Talend, dbt, or Apache Airflow.
- Data Governance & Compliance: Ensure data quality, security, and compliance with HIPAA, CMS, and SOC 2 standards.
- Performance Optimization: Optimize database performance, indexing strategies, and query performance for real-time analytics.
- Collaboration: Partner with data engineers, software developers, and business teams to align data architecture with business objectives.
- Technology Innovation: Stay up to date with emerging data technologies, AI/ML applications, and industry trends in healthcare data analytics.

Required Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
- Experience: 7+ years of experience in data architecture, data engineering, or related roles.
- Technical Skills: Strong expertise in SQL, NoSQL, and data modeling techniques. Hands-on experience with cloud data platforms (AWS Redshift, Snowflake, Google BigQuery, Azure Synapse). Experience with ETL frameworks (Informatica, Talend, dbt, Apache Airflow, etc.). Knowledge of big data technologies (Spark, Hadoop, Databricks). Strong understanding of data security and compliance (HIPAA, CMS, SOC 2, GDPR).
- Soft Skills: Strong analytical, problem-solving, and communication skills. Ability to work in a collaborative, agile environment.

Preferred Qualifications:
- Experience in healthcare data management, claims processing, risk adjustment, or pharmacy benefit management (PBM).
- Familiarity with AI/ML applications in healthcare analytics.
- Certifications in cloud data platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).

At Indium, diversity, equity, and inclusion (DEI) are the cornerstones of our values. We champion DEI through a dedicated council, expert sessions, and tailored training programs, ensuring an inclusive workplace for all. Our initiatives, including the WE@IN women empowerment program and our DEI calendar, foster a culture of respect and belonging. Recognized with the Human Capital Award, we are committed to creating an environment where every individual thrives. Join us in building a workplace that values diversity and drives innovation.
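For context only: a hedged sketch of the dimensional modeling this role covers, as DDL for a tiny star schema. The tables, columns, and IDENTITY syntax (Redshift-style) are hypothetical and deliberately simplified, with no real PHI fields.

```python
# Illustrative sketch: star-schema DDL strings (hypothetical, Redshift-style).
STAR_SCHEMA_DDL = [
    """
    CREATE TABLE dim_member (
        member_key   BIGINT IDENTITY(1,1),
        member_id    VARCHAR(32) NOT NULL,
        plan_code    VARCHAR(16),
        effective_dt DATE
    );
    """,
    """
    CREATE TABLE fact_claim (
        claim_key    BIGINT IDENTITY(1,1),
        member_key   BIGINT NOT NULL,      -- foreign key to dim_member
        service_dt   DATE NOT NULL,
        paid_amount  DECIMAL(12, 2)
    );
    """,
]

for stmt in STAR_SCHEMA_DDL:
    print(stmt)  # in practice, execute via your warehouse client/connection
```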

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bangalore, Karnataka, India (EOIZ Industrial Area)
Job Family: Artificial Intelligence & Machine Learning
Worker Type Reference: Regular - Permanent
Pay Rate Type: Salary
Career Level: T4(A)
Job ID: R-46721-2025

Description & Requirements
Introduction: A Career at HARMAN Technology Services (HTS). We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. At HARMAN HTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity's needs. Work at the convergence of cross-channel UX, cloud, insightful data, IoT, and mobility. Empower companies to create new digital business models, enter new markets, and improve customer experiences.

Role: Data Architect with Microsoft Azure + Fabric + Purview
Experience Required: 10+ years

Key Responsibilities of the role include:
- Develop and implement data engineering projects, including data lakehouse or big data platforms
- Knowledge of Azure Purview is a must
- Knowledge of Azure Data Fabric
- Ability to define reference data architecture
- Cloud-native data platform experience in the Microsoft data stack, including Azure Data Factory and Databricks on Azure
- Knowledge of the latest data trends, including data fabric and data mesh
- Robust knowledge of ETL, data transformation, and data standardization approaches
- Key contributor to the growth of the COE, influencing client revenues through data and analytics solutions
- Lead the selection, deployment, and management of data tools, platforms, and infrastructure
- Ability to technically guide a team of data engineers
- Oversee the design, development, and deployment of data solutions
- Define, differentiate, and strategize new data services/offerings and create reference architecture assets
- Drive partnerships with vendors on collaboration, capability building, go-to-market strategies, etc.
- Guide and inspire the organization about the business potential and opportunities around data
- Network with domain experts
- Collaborate with client teams to understand their business challenges and needs
- Develop and propose data solutions tailored to client-specific requirements
- Influence client revenues through innovative solutions and thought leadership
- Lead client engagements from project initiation to deployment
- Build and maintain strong relationships with key clients and stakeholders
- Build reusable methodologies, pipelines, and models
- Create data pipelines for more efficient and repeatable data science projects
- Design and implement data architecture solutions that support business requirements and meet organizational needs
- Collaborate with stakeholders to identify data requirements and develop data models and data flow diagrams
- Work with cross-functional teams to ensure that data is integrated, transformed, and loaded effectively across different platforms and systems
- Develop and implement data governance policies and procedures to ensure that data is managed securely and efficiently
- Develop and maintain a deep understanding of data platforms, technologies, and tools, and evaluate new technologies and solutions to improve data management processes
- Ensure compliance with regulatory and industry standards for data management and security
- Develop and maintain data models, data warehouses, data lakes, and data marts to support data analysis and reporting
- Ensure data quality, accuracy, and consistency across all data sources
- Knowledge of ETL and data integration tools such as Informatica, Qlik, Talend, and Apache NiFi
- Experience with data modeling and design tools such as ERwin, PowerDesigner, or ER/Studio
- Knowledge of data governance, data quality, and data security best practices
- Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform
- Familiarity with programming languages such as Python, Java, or Scala
- Experience with data visualization tools such as Tableau, Power BI, or QlikView
- Understanding of analytics and machine learning concepts and tools
- Knowledge of project management methodologies and tools to manage and deliver complex data projects
- Skilled in relational database technologies such as MySQL, PostgreSQL, and Oracle, as well as NoSQL databases such as MongoDB and Cassandra
- Strong expertise in cloud-based data services such as Azure Data Lake, Synapse, Azure Data Factory, AWS Glue, AWS Redshift, and Azure SQL
- Knowledge of big data technologies such as Hadoop, Spark, Snowflake, Databricks, and Kafka to process and analyze large volumes of data
- Proficient in data integration techniques to combine data from various sources into a centralized location
- Strong data modeling, data warehousing, and data integration skills

People & Interpersonal Skills:
- Build and manage a high-performing team of data engineers and other specialists
- Foster a culture of innovation and collaboration within the data team and across the organization
- Demonstrated ability to work in diverse, cross-functional teams in a dynamic business environment
- Candidates should be confident, energetic self-starters with strong communication skills
- Candidates should exhibit superior presentation skills and the ability to present compelling solutions that guide and inspire
- Provide technical guidance and mentorship to the data team
- Collaborate with other stakeholders across the company to align on vision and goals
- Communicate and present the data capabilities and achievements to clients and partners
- Stay updated on the latest trends and developments in the data domain

What is required for the role?
- 10+ years of experience in the information technology industry, with a strong focus on data engineering and architecture, preferably as an Azure data engineering lead
- 8+ years of data engineering or data architecture experience in successfully launching, planning, and executing advanced data projects
- Data governance experience is mandatory
- MS Fabric certification
- Experience working on RFPs/proposals, presales activities, business development, and overseeing delivery of data projects is highly desired
- Educational qualification: a master's or bachelor's degree in computer science, data science, information systems, operations research, statistics, applied mathematics, economics, engineering, or physics
- Demonstrated ability to manage data projects and diverse teams
- Experience creating data and analytics solutions
- Experience building data solutions in one or more domains (Industrial, Healthcare, Retail, Communication)
- Problem-solving, communication, and collaboration skills
- Good knowledge of data visualization and reporting tools
- Ability to normalize and standardize data as per key KPIs and metrics

Benefits:
- Opportunities for professional growth and development
- Collaborative and supportive work environment

What We Offer:
- Access to employee discounts on world-class HARMAN/Samsung products (JBL, Harman Kardon, AKG, etc.)
- Professional development opportunities through HARMAN University's business and leadership academies
- An inclusive and diverse work environment that fosters and encourages professional and personal development

You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you, all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.

About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we've been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today's most sought-after performers, while our digital transformation solutions serve humanity by addressing the world's ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners, and each other. If you're ready to innovate and do work that makes a lasting impact, join our talent community today!

Important Notice: Recruitment Scams
Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com.

HARMAN is proud to be an Equal Opportunity employer. HARMAN strives to hire the best qualified candidates and is committed to building a workforce representative of the diverse marketplaces and communities of our global colleagues and customers. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. HARMAN attracts, hires, and develops employees based on merit, qualifications and job-related performance. (www.harman.com)

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Thiruvananthapuram, Kerala

Remote

About the Company: Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure are limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We're looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.

About the Role: We are looking for a detail-oriented and technically skilled BI Engineer to design, build, and maintain robust data pipelines and visualization tools that empower data-driven decision-making across the organization. The ideal candidate will work closely with stakeholders to translate business needs into actionable insights by developing and optimizing BI solutions.

Location: This role is office-based at our Trivandrum, Kerala office.

What You'll Do (Key Responsibilities):
- Design, develop, and maintain scalable ETL (Extract, Transform, Load) pipelines to support data integration from multiple sources.
- Build and optimize data models and data warehouses for business reporting and analysis.
- Develop dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Looker).
- Collaborate with data analysts, data scientists, and business stakeholders to understand reporting needs and deliver effective solutions.
- Ensure data accuracy, consistency, and integrity across reporting systems.
- Perform data validation, cleansing, and transformation as necessary.
- Identify opportunities to automate processes and improve reporting efficiency.
- Monitor BI tools and infrastructure performance, and troubleshoot issues as needed.
- Stay up to date with emerging BI technologies and best practices.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 2–4 years of experience as a BI Engineer, Data Engineer, or in a similar role.
- Proficiency in SQL and experience with data modeling and data warehousing (e.g., Snowflake, Redshift, BigQuery).
- Experience with BI and data visualization tools (e.g., Power BI, Tableau, Qlik, Looker).
- Strong understanding of ETL processes and data pipeline design.
- Excellent problem-solving skills and attention to detail.

Preferred:
- Experience with Python, R, or other scripting languages for data manipulation.
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud Platform).
- Knowledge of version control (e.g., Git) and CI/CD practices.
- Experience with APIs, data governance, and data cataloging tools.

Compensation: We offer a competitive base salary along with equity options, providing an opportunity to share in the success and growth of Armada.

You're a Great Fit if You're:
- A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
- A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
- Someone who thrives in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
- A collaborative team player. You focus on business success and are motivated by team accomplishment rather than a personal agenda.
- Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement: At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
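For context only: a small sketch of an extract-transform step feeding a BI dashboard, of the kind the ETL responsibilities describe. The source query, engine URL, schema, and metric definitions are all hypothetical.

```python
# Illustrative sketch: publish a tidy reporting table for a dashboard
# (hypothetical connection string and table names).
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:pass@warehouse:5432/analytics")

# Extract raw order lines from the warehouse.
orders = pd.read_sql("SELECT order_date, region, amount FROM sales.orders", engine)

# Transform into a daily revenue metric per region.
daily = (
    orders.groupby(["order_date", "region"], as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "daily_revenue"})
)

# Load the mart table that Power BI/Tableau/Looker reads from.
daily.to_sql("rpt_daily_revenue", engine, schema="marts",
             if_exists="replace", index=False)
```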

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 2+ years of experience processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark, or a Hadoop-based big data solution)
- 2+ years of experience with relational database technology (such as Redshift, Oracle, MySQL, or MS SQL)
- 2+ years of experience developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes)
- 5+ years of data engineering experience
- Experience managing a data or BI team
- Experience communicating with senior management and customers, verbally and in writing
- Experience leading and influencing the data or BI strategy of your team or organization
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS

As a Data Engineering Manager, you will lead a team of data engineers, front-end engineers, and business intelligence engineers. You will own our internal data products (Yoda), transform to AI, build agents, and scale them for IN and emerging stores. You will provide technical leadership, drive application and data engineering initiatives, and build end-to-end data solutions that are highly available, scalable, stable, secure, and cost-effective. You strive for simplicity and demonstrate creativity with sound judgement. You deliver data and reporting solutions that are customer-focused, easy to consume, and create business impact. You are passionate about working with huge datasets and have experience with the organization and curation of data for analytics. You have a strategic and long-term view on the architecture of advanced data ecosystems. You are experienced in building efficient and scalable data services and can integrate data systems with AWS tools and services to support a variety of customer use cases/applications.

Key job responsibilities:
• Lead a team of data engineers, front-end engineers, and business intelligence engineers to deliver cross-functional, data and application engineering projects for Databases, Analytics, and AI/ML services
• Establish and clearly communicate organizational vision, goals, and success measures
• Collaborate with business stakeholders to develop the roadmap and product requirements
• Build, own, prioritize, lead, and deliver a roadmap of large and complex multi-functional projects and programs
• Manage AWS infrastructure, IMR cost, and RDS/Dynamo instances
• Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
• Own the design, development, and maintenance of metrics, reports, dashboards, etc. to drive key business decisions

About the team: CoBRA is the Central BI Reporting and Analytics org for IN stores and the AI partner for international emerging stores. CoBRA's mission is to empower Category and Seller orgs, including Brand, Account, marketing, and product/program teams, with self-service products using AI (Yoda and Bedrock agents), build actionable insights (QuickSight Q, custom agents, Q Business), and help them make faster and smarter decisions using science solutions across the Amazon flywheel on all inputs (selection, pricing, and speed).

- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS tools and technologies (Redshift, S3, EC2)
- Knowledge of building AI tools, AWS Bedrock agents, and LLM/foundation models
- Experience supporting ML models for data needs
- Exposure to prompt engineering and the emerging AI technology landscape

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
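For context only: a hedged sketch of the Redshift/S3 plumbing skills listed above, submitting a query through the Redshift Data API with boto3. The cluster, database, user, schema, and SQL are hypothetical.

```python
# Illustrative sketch: run a query via the Redshift Data API (hypothetical names).
import boto3

rsd = boto3.client("redshift-data", region_name="ap-south-1")
resp = rsd.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="bi_user",
    Sql="SELECT category, SUM(gms) AS total_gms FROM sales.daily GROUP BY category",
)
# The API is asynchronous; poll describe_statement/get_statement_result with this id.
print("statement id:", resp["Id"])
```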

Posted 1 week ago

Apply

0 years

35 - 70 Lacs

Delhi, India

On-site

Skills: RDBMS, Solution Architecture, Customer-Facing Roles, Presales, Cloud, Databases

Job Role: We are looking for Solution Architects to design data management solutions, with strong knowledge of architecting and designing highly available and scalable databases on cloud. You will deliver hands-on, business-oriented strategic and technical consulting, translating requirements into cloud-native and marketplace data/database management architectures and solutions.

Key Responsibilities:
- Designing PaaS and IaaS database technology (RDBMS, NoSQL, distributed databases)
- Designing cloud infrastructure services (compute, storage, network, etc.) for DB deployment
- Designing database authentication and authorization (IAM, RBAC) solutions
- Capacity planning, performance analysis, and database optimization to manage DB workloads
- Analysing and identifying infrastructure requirements on-premises and in other cloud environments such as Azure, Google, and AWS
- Designing high availability and disaster recovery solutions for database deployment on IaaS and PaaS platforms
- Designing database backup and recovery solutions using native or enterprise backup tooling
- Designing database/data management and optimization job/task automation
- Designing homogeneous and heterogeneous database migration solutions within on-premises environments or from on-premises to cloud (IaaS and PaaS)
- Designing database monitoring, alert notification/reporting, and data masking/encryption solutions
- Designing ETL/ELT solutions for data ingestion and data transformation
- Mentoring implementation teams, handholding where needed on best practices, and making sure the solution is implemented the right way
- Preparing high-level and low-level design documents as required by the implementation team

Database Technologies and DB Services: Azure SQL, Azure SQL MI, PostgreSQL, MySQL, Oracle, SQL Server, AWS RDS, Amazon Aurora, Cloud SQL, Cloud Spanner, Cosmos DB, Azure Synapse Analytics / Google BigQuery / Amazon Redshift

Educational Qualifications: Bachelor's degree in Engineering/Computer Science, Computer Engineering, or Information Technology (B.Tech/BE/M.Tech/MCA), full time.

What are the nature and scope of responsibilities the candidate should have handled?
- Understand the customer's overall data estate, business principles, and operations, and discover/assess the database workload
- Design complex, highly available, distributed, failsafe cloud managed and unmanaged databases
- Prepare HLD and LLD documents
- Evaluate and recommend the cloud managed database services best suited to customer needs for an optimal solution
- Drive cloud managed database technology initiatives end to end and across multiple layers of architecture
- Provide strong technical leadership in adopting and contributing to open-source and cloud managed/unmanaged database technologies

Knowledge & Skills:
- Understanding of public/private/hybrid cloud solutions and database services on cloud
- Extensive experience conducting cloud readiness assessments for database environments from business and technical perspectives
- Knowledge of cloud best practices and guidelines for database deployment
- Knowledge of cloud-native HA-DR and database backup solutions
- Experience and strong knowledge of Azure/GCP reference architectures
- Azure/GCP certified architect (preferred)
- Good oral and written communication
- Ability to work on a distributed, multicultural team
- Good understanding of ITSM processes and related tools
- Willingness to learn and explore new technologies
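For context only: a hedged sketch of one concrete slice of this design work, provisioning a managed PostgreSQL instance with boto3. All identifiers and sizes are hypothetical; a production design would add the HA-DR, backup, and IAM/RBAC decisions described above.

```python
# Illustrative sketch: provision a managed PostgreSQL instance (hypothetical values).
import boto3

rds = boto3.client("rds", region_name="ap-south-1")
rds.create_db_instance(
    DBInstanceIdentifier="example-pg-prod",
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=200,
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,   # delegate the credential to Secrets Manager
    MultiAZ=True,                    # high availability across availability zones
    StorageEncrypted=True,           # encryption at rest
)
```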

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

Job Details
About Salesforce: We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too, driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good, you've come to the right place.

Role Description: Salesforce has immediate opportunities for software developers who want their lines of code to have a significant and measurable positive impact for users, the company's bottom line, and the industry. You will work with a group of world-class engineers to build the breakthrough features our customers will love, adopt, and use, while keeping our trusted CRM platform stable and scalable. The software engineer role at Salesforce encompasses architecture, design, implementation, and testing to ensure we build products right and release them with high quality. We pride ourselves on writing high-quality, maintainable code that strengthens the stability of the product and makes our lives easier. We embrace the hybrid model and celebrate the individual strengths of each team member while cultivating everyone on the team to grow into the best version of themselves. We believe that autonomous teams with the freedom to make decisions will empower the individuals, the product, the company, and the customers they serve to thrive.

Your Impact: As a Backend Software Engineer, your responsibilities will include:
- Build new and exciting components in an ever-growing and evolving market technology to provide scale and efficiency.
- Develop high-quality, production-ready code that millions of users of our cloud platform can use.
- Design, implement, and tune robust APIs and API framework-related features that perform and scale in a multi-tenant environment.
- Work in a Hybrid Engineering model and contribute to all phases of the SDLC, including design, implementation, code reviews, automation, and testing of features.
- Build efficient components and algorithms in a microservice, multi-tenant SaaS cloud environment.
- Review code, mentor junior engineers, and provide technical guidance to the team (depending on seniority level).

Required Skills:
- Mastery of multiple programming languages and platforms
- 3+ years of software development experience
- Deep knowledge of object-oriented programming and scripting languages: Java, Python, Scala, C#, Go, Node.js, and C++
- Strong SQL skills and experience with relational and non-relational databases (e.g., Postgres, Trino, Redshift, Mongo)
- Experience developing SaaS products over public cloud infrastructure (AWS/Azure/GCP)
- Proven experience designing and developing distributed systems at scale
- A deep understanding of software development best practices and demonstrated leadership skills
- Degree or equivalent relevant experience required. Experience will be evaluated based on the core competencies for the role (e.g., extracurricular leadership roles, military experience, volunteer roles, work experience, etc.).

Preferred Skills:
- Experience with Big Data/ML and S3
- Hands-on experience with streaming technologies like Kafka
- Experience with Elasticsearch
- Experience with Terraform, Kubernetes, and Docker
- Experience working in a high-paced, rapidly growing multinational organization

Benefits & Perks: Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more! World-class enablement and on-demand training with Trailhead.com. Exposure to executive thought leaders and regular 1:1 coaching with leadership. Volunteer opportunities and participation in our 1:1:1 model for giving back to the community. For more details, visit https://www.salesforcebenefits.com/

Accommodations: If you require assistance due to a disability when applying for open positions, please submit a request via this Accommodations Request Form.

Posting Statement: Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications, without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

At LeadSquared, we are committed to staying current with the latest technology trends and leveraging cutting-edge tech stacks to enhance our product. As a member of our engineering team, you will have the opportunity to work closely with the newest web and mobile technologies, tackling challenges related to scalability, performance, security, and cost optimization. Our primary objective is to create the industry's premier SaaS platform for sales execution, making LeadSquared an ideal place to embark on an exciting career.

The role is tailored for developers with a proven track record in developing high-performance microservices using Golang, Redis, and various AWS services. Your responsibilities will include deciphering business requirements and crafting solutions that are not only secure and scalable but also high-performing and easily testable.

Key Requirements:
- A minimum of 5 years of experience in constructing high-performance APIs and services, with a preference for Golang.
- Proficiency in working with data streams such as Kafka or AWS Kinesis.
- Hands-on experience with large-scale enterprise applications while adhering to best practices.
- Strong troubleshooting and debugging skills, coupled with the ability to design and create reusable, maintainable, and easily debuggable applications.
- Proficiency in Git is essential.

Preferred Skills:
- Familiarity with Kubernetes and microservices.
- Experience with OLAP databases/data warehouses like ClickHouse or Redshift.
- Experience in developing and deploying applications on the AWS platform.

If you are passionate about cutting-edge technologies, eager to tackle challenging projects, and keen on building innovative solutions, this role at LeadSquared is the perfect opportunity to excel and grow in your career.
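For context only, and sketched in Python for brevity (the role itself is Golang-centric): a read-through cache pattern with Redis, of the kind high-performance services like these often use. The keys, TTL, and loader function are hypothetical.

```python
# Illustrative sketch: read-through caching with Redis (hypothetical names).
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def get_lead(lead_id: str) -> dict:
    cached = r.get(f"lead:{lead_id}")
    if cached is not None:
        return json.loads(cached)                       # cache hit
    lead = {"id": lead_id, "stage": "new"}              # stand-in for a DB fetch
    r.setex(f"lead:{lead_id}", 300, json.dumps(lead))   # cache for 5 minutes
    return lead

print(get_lead("42"))
```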

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Engineer
Profile code: LOC_DATA_ETL_SSA_1

At bpost group, data is more than numbers; it's a strategic asset that drives critical decisions across our logistics and e-commerce operations. As we continue to evolve into a more data-driven organization, we are investing in cutting-edge infrastructure and talent to unlock deeper insights and enable smarter resource allocation. We are seeking a Data Engineer to join our Yield and Capacity Management team. In this role, you will play a central part in designing, building, and maintaining robust data pipelines and platforms that support advanced analytics, forecasting, and optimization of our operational capacity and pricing strategies. If you're passionate about scalable data architecture, enjoy working at the intersection of business and technology, and want to make a tangible impact on performance and profitability, this is your opportunity to help shape the future of data-driven logistics at bpost group.

Role Summary: We are looking for a seasoned Data Engineer with strong experience in designing and building modern data platforms from the ground up. This role involves close collaboration with architects, DevOps, and business teams to establish a scalable, secure, and high-performing data ecosystem. The ideal candidate is hands-on, cloud-savvy, and passionate about data infrastructure, governance, and engineering best practices.

Key Responsibilities:
Platform Design and Architecture:
- Design and implement the foundational architecture for a new enterprise-grade data platform
- Work with architects to define infrastructure, storage, and processing solutions aligned with business needs
- Ensure the platform adheres to security, compliance, and scalability standards
Data Ingestion and Pipeline Development:
- Build scalable, reusable data ingestion pipelines from structured and unstructured sources
- Develop batch and streaming data workflows using modern ETL/ELT frameworks
- Ensure data quality, lineage, and monitoring across pipelines
Cloud and Infrastructure Integration:
- Set up and configure cloud-native data services (e.g., AWS Glue, Redshift, S3, Lambda, EMR)
- Collaborate with DevOps to implement CI/CD for data pipeline deployments
- Support infrastructure-as-code practices (e.g., Terraform, CloudFormation)
Governance and Operationalization:
- Implement data cataloging, lineage, and access control mechanisms
- Define and enforce data platform usage standards, schema management, and audit logging
- Establish robust monitoring and alerting practices for data workflows
Cross-Functional Collaboration:
- Work closely with business teams, data scientists, and analysts to understand platform requirements
- Enable self-service access to data through tools and standardized interfaces
- Act as a technical advisor for data-related platform decisions
Hands-On Engineering:
- Contribute directly to the codebase using best practices for version control, testing, and documentation
- Perform peer reviews and contribute to technical design sessions
- Lead proof-of-concept implementations for new tools or frameworks

Qualifications:
Experience:
- 6–10 years of experience in data engineering, with at least 3 years building or re-architecting data platforms
- Proven track record in setting up cloud-based or hybrid data infrastructures
Technical Skills:
- Deep knowledge of data warehousing, data lakes, and real-time data processing
- Relational database technologies: Snowflake, Postgres, Oracle
- Proficiency in SQL (dbt, sqlmesh), Python, and Spark or similar processing engines
- Experience with AWS, Azure, or GCP data services
- Familiarity with data modeling, schema evolution, and partitioning strategies
- Competency in modern data orchestration tools (e.g., Apache Airflow, dbt)
- High-level understanding of the capabilities and roles of different technical areas (cloud engineering, platform engineering, analytics engineering, ML engineering)
Soft Skills:
- Strong problem-solving and systems-thinking mindset
- Effective communication and stakeholder management abilities
- Ability to balance strategic planning with hands-on implementation
Preferred Skills:
- Experience with Kubernetes, Docker, or serverless architectures
- Exposure to data mesh or domain-oriented data platform design
- Familiarity with tools like Apache Kafka, Delta Lake, or Iceberg
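For context only: a hedged sketch of the streaming side of the "batch and streaming data workflows" responsibility, using Spark Structured Streaming to land Kafka events in a lake path. Topic, brokers, and paths are hypothetical, and the job assumes the spark-sql-kafka connector is available on the cluster.

```python
# Illustrative sketch: Kafka -> lake ingestion with Structured Streaming
# (hypothetical topic, brokers, and S3 paths).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parcel_events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "parcel.scan.events")
         .load()
)

query = (
    events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
          .writeStream.format("parquet")
          .option("path", "s3://example-lake/raw/parcel_scans/")
          .option("checkpointLocation", "s3://example-lake/_checkpoints/parcel_scans/")
          .start()
)
query.awaitTermination()
```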

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Hybrid

We're looking for a talented and results-oriented Cloud Solutions Architect to work as a key member of Sureify's engineering team. You'll help build and evolve our next-generation cloud-based compute platform for digitally delivered life insurance. You'll consider many dimensions such as strategic goals, growth models, opportunity cost, talent, and reliability. You'll collaborate closely with the product development team on platform feature architecture such that the architecture aligns with operational needs and opportunities. With the number of customers growing and growing, it’s time for us to mature the fabric our software runs on. This is your opportunity to make a large impact at a high-growth enterprise software company.

Key Responsibilities:
Collaborate with key stakeholders across our product, delivery, data, and support teams to design scalable and secure application architectures on AWS using services like EC2, ECS, EKS, Lambda, VPC, RDS, and ElastiCache, provisioned via Terraform.
Design and implement CI/CD pipelines using GitHub, Jenkins, Spinnaker, and Helm to automate application deployment and updates, with a key focus on container management, orchestration, scaling, performance and resource optimization, and deployment strategies.
Design and implement security best practices for AWS applications, including Identity and Access Management (IAM), encryption, container security, and secure coding practices.
Design and implement application observability using CloudWatch and New Relic, with key focus on monitoring, logging, and alerting to provide insights into application performance and health.
Design and implement key integrations of application components and external systems, ensuring smooth and efficient data flow.
Diagnose and resolve issues related to application performance, availability, and reliability.
Create, maintain, and prioritise a quarter-over-quarter backlog by identifying key areas of improvement such as cost optimization, process improvement, and security enhancements.
Create and maintain comprehensive documentation outlining the infrastructure design, integrations, deployment processes, and configuration.
Work closely with the DevOps team as a guide, mentor, and enabler to ensure that the practices you design and implement are followed and imbibed by the team.

Required Skills:
Proficiency in AWS services such as EC2, ECS, EKS, S3, RDS, VPC, Lambda, SES, SQS, ElastiCache, Redshift, and EFS.
Strong programming skills in languages such as Groovy, Python, and Bash shell scripting.
Experience with CI/CD tools and practices, including Jenkins, Spinnaker, and ArgoCD.
Familiarity with IaC tools like Terraform or CloudFormation.
Understanding of AWS security best practices, including IAM and KMS.
Familiarity with Agile development practices and methodologies.
Strong analytical skills with the ability to troubleshoot and resolve complex issues.
Proficiency in using observability, monitoring, and logging tools like AWS CloudWatch, New Relic, and Prometheus.
Knowledge of container orchestration tools and concepts, including Kubernetes and Docker.
Strong teamwork and communication skills, with the ability to work effectively with cross-functional teams.

Nice to have: AWS Certified Solutions Architect - Associate or Professional.
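To illustrate the kind of security-posture automation this role covers, here is a small boto3 sketch that flags S3 buckets without a full public-access block; this is an assumed example of such a check, not Sureify's actual tooling.

# Sketch: flag S3 buckets lacking a full public-access block.
# Assumes credentials with s3:ListAllMyBuckets and
# s3:GetBucketPublicAccessBlock permissions.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access block only partially enabled: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise

A check like this would typically run on a schedule (e.g., a Lambda on an EventBridge rule) and feed an alerting channel rather than print to stdout.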

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Sr. AWS Data Engineer
Years of experience: 5-10 years (with a minimum of 5 years of relevant experience)
Work mode: WFO - Chennai (mandatory)
Type: Permanent
Key skills: Python, SQL, PySpark, AWS, Databricks, Data Modelling

Essential Skills / Experience:
4 to 6 years of professional experience in Data Engineering or a related field.
Strong programming experience with Python, including data wrangling, pipeline automation, and scripting.
Deep expertise in writing complex and optimized SQL queries on large-scale datasets.
Solid hands-on experience with PySpark and distributed data processing frameworks.
Expertise working with Databricks for developing and orchestrating data pipelines.
Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda.
Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas).
Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions.
Understanding of data lake, lakehouse, and data warehouse architectures.
Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions).
Strong troubleshooting and performance optimization skills in large-scale data processing environments.
Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams.

Desirable Skills / Experience:
AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional).
Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch).
Experience working in healthcare, life sciences, finance, or another regulated industry.
Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.).
Knowledge of modern data architectures (Data Mesh, Data Fabric).
Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming.
Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
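As an illustration of the PySpark skills listed above, here is a minimal batch-job sketch reading raw JSON from S3 and writing partitioned Parquet; the paths and column names are hypothetical.

# Sketch: a PySpark batch job of the shape this role describes.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch").getOrCreate()

# Read raw landing-zone JSON (s3:// works on EMR; use s3a:// on open-source Spark).
raw = spark.read.json("s3://example-raw/orders/")

# Deduplicate and derive a date column to partition by.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write the curated layer as date-partitioned Parquet.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated/orders/"))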

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job Overview and Responsibilities
United Airlines is seeking talented people to join the Data Engineering team as a Sr AWS Redshift DBA. The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial and operational projects with a digital focus. As a Redshift DBA, you will engage with the Data Engineering and DevOps teams (including internal customers) on Redshift database administration initiatives. You will be involved in database administration, performance tuning, management, and security of the AWS Redshift deployment and enterprise data warehouse. You will provide technical support for all database environments, including development, pre-production, and production databases, and will be responsible for setting up infrastructure and configuring and maintaining the environment, including security, working alongside the DE and Cloud Engineering teams. You will develop and implement innovative solutions leading to automation, and mentor and train junior engineers.

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Qualifications
We are seeking a dedicated and skilled Redshift Database Administrator with a proven track record of optimizing database performance, ensuring data security, and implementing scalable solutions, eager to leverage expertise in Redshift and AWS to drive efficiency and reliability in a dynamic organization.

Required:
BS/BA in computer science or a related STEM field. Individuals who have a natural curiosity and desire to solve problems are encouraged to apply.
4+ years of IT experience, preferably in Redshift database administration, SQL, and query optimization. Must have experience in performance tuning and monitoring.
3+ years of experience in scripting (Python, Bash) and database security & compliance.
4+ years in an AWS Redshift production environment.
3+ years of experience with relational database systems like Oracle and Teradata.
2+ years of experience with cloud migration of existing on-premise apps to AWS.
Excellent and proven knowledge of Postgres/SQL on Amazon RDS.
Excellent and proven knowledge of SQL.
Must have managed Redshift clusters, including provisioning, monitoring, and performance tuning to ensure optimal query execution.
Must be legally authorized to work in India for any employer without sponsorship.
Must be fluent in English and Hindi (written and spoken).
Successful completion of interview required to meet job qualification.
Reliable, punctual attendance is an essential function of the position.

Preferred:
Master's degree in Computer Science or a related STEM field.
Experience with cloud-based systems like AWS, Azure, or Google Cloud.
Certified Developer / Architect on AWS.
Strong experience with continuous integration and delivery using Agile methodologies.
Data engineering experience in the transportation/airline industry.
Strong problem-solving skills.
Strong knowledge of Big Data.
GGN00001785
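For a taste of the routine tuning work such a role involves, here is a small illustrative script (not United's tooling) that uses the Redshift system view SVV_TABLE_INFO to flag tables that are candidates for VACUUM or ANALYZE; the connection details are placeholders.

# Sketch: flag Redshift tables that likely need VACUUM/ANALYZE,
# using the SVV_TABLE_INFO system view. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="dba", password="...",
)

# "schema" and "table" are reserved words, hence the quoting.
QUERY = """
    SELECT "schema", "table", unsorted, stats_off, tbl_rows
    FROM svv_table_info
    WHERE unsorted > 20 OR stats_off > 10
    ORDER BY unsorted DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for schema, table, unsorted, stats_off, rows in cur.fetchall():
        print(f"{schema}.{table}: {unsorted}% unsorted, "
              f"stats {stats_off}% stale, {rows} rows")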

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Senior DB Developer – Sports/Healthcare
Location: Ahmedabad, Gujarat
Job Type: Full-Time

Job Description: We are seeking an exceptional Senior Database Developer with 8+ years of expertise who will play a critical role in the design and development of a scalable, configurable, and customizable platform. Our new Senior Database Developer will help with the design, collaborate with cross-functional teams, and provide data solutions for delivering high-performance applications. If you are passionate about bringing innovative technology to life, owning and solving problems in an independent, fail-fast, and highly supportive environment, and working with a creative and dynamic team, we want to hear from you. This role requires a strong understanding of enterprise applications and large-scale data processing platforms.

Key Responsibilities:
● Design and architect scalable, efficient, highly available, and secure database solutions to meet business requirements.
● Design the schema and ER diagram for a horizontally scalable architecture.
● Strong knowledge of NoSQL/MongoDB.
● Knowledge of ETL tools for data migration from source to destination.
● Establish database standards, procedures, and best practices for data modelling, storage, security, and performance.
● Implement data partitioning, sharding, and replication for high-throughput systems.
● Optimize data lake, data warehouse, and NoSQL solutions for fast retrieval.
● Collaborate with developers and data engineers to define data requirements and optimize database performance.
● Implement database security policies ensuring compliance with regulatory standards (e.g., GDPR, HIPAA).
● Optimize and tune databases for performance, scalability, and availability.
● Design disaster recovery and backup solutions to ensure data protection and business continuity.
● Evaluate and implement new database technologies and frameworks as needed.
● Provide expertise in database migration, transformation, and modernization projects.
● Conduct performance analysis and troubleshooting of database-related issues.
● Document database architecture and standards for future reference.

Required Skills and Qualifications:
● 8+ years of experience in database architecture, design, and management.
● Experience with AWS (Amazon Web Services) and similar platforms like Azure and GCP (Google Cloud Platform).
● Experience deploying and managing applications, utilizing various cloud services (compute, storage, databases, etc.).
● Experience with specific services like EC2, S3, and Lambda (for AWS).
● Proficiency with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, Oracle, MongoDB, Cassandra).
● MongoDB and NoSQL experience is a big added advantage.
● Expertise in data modelling, schema design, indexing, and partitioning.
● Experience with ETL processes, data warehousing, and big data technologies (e.g., Apache NiFi, Airflow, Redshift, Snowflake, Hadoop).
● Proficiency in database performance tuning, optimization, and monitoring tools.
● Strong knowledge of data security, encryption, and compliance frameworks.
● Excellent analytical, problem-solving, and communication skills.
● Proven experience in database migration and modernization projects.

Preferred Qualifications:
● Certifications in cloud platforms (AWS, GCP, Azure) or database technologies.
● Experience with machine learning and AI-driven data solutions.
● Knowledge of graph databases and time-series databases.
● Familiarity with Kubernetes, containerized databases, and microservices architecture.
Education:
● Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical field.

Why Join Us?
● Be part of an exciting and dynamic project in the sports/health data domain.
● Work with cutting-edge technologies and large-scale data processing systems.
● Collaborative, fast-paced team environment with opportunities for professional growth.
● Competitive salary, bonus, and benefits package.
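As a sketch of the indexing and sharding work the responsibilities above mention, here is a short PyMongo example; the database, collection, and key names are illustrative, not this platform's actual schema.

# Sketch: indexing and sharding setup via PyMongo against a sharded
# cluster. Names are illustrative placeholders.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # a mongos router in a real cluster
db = client["sports"]

# Compound index supporting per-athlete time-range queries.
db.events.create_index([("athlete_id", ASCENDING), ("recorded_at", ASCENDING)])

# Enable sharding and shard on a hashed key for even write distribution.
client.admin.command("enableSharding", "sports")
client.admin.command("shardCollection", "sports.events",
                     key={"athlete_id": "hashed"})

A hashed shard key spreads high-throughput writes evenly across shards, at the cost of making range scans on that key scatter-gather; a ranged key would be the alternative trade-off.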

Posted 2 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
3-4 years of hands-on experience in data engineering, with a strong focus on AWS cloud services.
Proficiency in Python for data manipulation, scripting, and automation.
Strong command of SQL for data querying, transformation, and database management.
Demonstrable experience with AWS data services, including:
Amazon S3: data lake storage and management.
AWS Glue: ETL service for data preparation.
Amazon Redshift: cloud data warehousing.
AWS Lambda: serverless computing for data processing.
Amazon EMR: managed Hadoop framework for big data processing (Spark/PySpark experience highly preferred).
AWS Kinesis (or Kafka): real-time data streaming.
Strong analytical, problem-solving, and debugging skills.
Excellent communication and collaboration abilities, with the capacity to work effectively in an agile team environment.

Responsibilities
Troubleshoot and resolve data-related issues and performance bottlenecks in existing pipelines.
Develop and maintain data quality checks, monitoring, and alerting mechanisms to ensure data pipeline reliability.
Participate in code reviews, contribute to architectural discussions, and promote best practices in data engineering.
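To make the Lambda/Kinesis combination listed above concrete, here is a minimal handler sketch that decodes Kinesis records and lands them in S3; the bucket name and key scheme are placeholders.

# Sketch: an AWS Lambda handler consuming a Kinesis stream and landing
# records in S3. Record fields follow the standard Kinesis event format;
# the bucket name is a placeholder.
import base64
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        key = f"events/{record['kinesis']['sequenceNumber']}.json"
        s3.put_object(Bucket="example-raw-events", Key=key,
                      Body=json.dumps(payload).encode("utf-8"))
    return {"processed": len(event["Records"])}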

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
About the Role: Nielsen is seeking an organized, detail-oriented team player to join the ITAM Back Office Engineering team in the role of Software Engineer. Nielsen's Audience Measurement Engineering platforms support the measurement of television viewing in more than 30 countries around the world. Ideal candidates will have exceptional skills in programming, testing, debugging, and problem solving, as well as effective communication and writing skills.

Responsibilities
System deployment: Conceive, design, and build new features in the existing backend processing pipelines.
CI/CD implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
Code quality and best practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
Performance optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores.
Mentorship and collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
Security and compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.

Key Skills
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Proven experience (minimum 8 years) in high-volume data processing, using ETL tools such as AWS Glue or PySpark, plus Python, SQL, and databases such as Postgres.
Experience in development on an AWS platform.
Strong understanding of CI/CD principles and tools; GitLab a plus.
Excellent problem-solving and debugging skills.
Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions.
Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply.
Utilizes team collaboration to create innovative solutions efficiently.

Other Desirable Skills
Knowledge of networking principles and security best practices.
AWS certifications.
Experience with data warehouses, ETL, and/or data lakes very desirable.
Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and Opsgenie a bonus.
Exposure to the Google Cloud Platform (GCP).

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain.
Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
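As an illustration of the Glue/PySpark pipeline work this role describes, here is a skeleton Glue job that reads from the Data Catalog and writes to Postgres over JDBC; the catalog, connection, and table names are assumptions, not Nielsen's actual pipeline.

# Sketch: skeleton of an AWS Glue ETL job reading from the Data Catalog
# and writing to Postgres via a catalog JDBC connection. Names are
# illustrative placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog.
frame = glue_context.create_dynamic_frame.from_catalog(
    database="example_catalog", table_name="viewing_events"
)

# Write to Postgres using a pre-defined catalog connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=frame,
    catalog_connection="example-postgres",
    connection_options={"dbtable": "public.viewing_events",
                        "database": "measurement"},
)

job.commit()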

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job Overview and Responsibilities
This role will be responsible for collaborating with the Business and IT teams to identify the value, scope, features, and delivery roadmap for data engineering products and solutions, and for communicating with stakeholders across the board, including customers, business managers, and the development team, to make sure the goals are clear and the vision is aligned with business objectives.
Perform data analysis using SQL.
Perform data quality analysis, data profiling, and summary reporting.
Perform trend analysis and create dashboards based on visualization techniques.
Execute assigned projects/analyses as per the agreed timelines and with accuracy and quality.
Complete analysis as required, document results, and formally present findings to management.
Perform ETL workflow analysis, create current/future-state data flow diagrams, and help the team assess the business impact of any changes or enhancements.
Understand the existing Python code workbooks and write pseudocode.
Collaborate with key stakeholders to identify the business case/value and create documentation.
Excellent communication and analytical skills are required.

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Qualifications
Required:
BE, BTech or equivalent in computer science or a related STEM field.
5+ years of total IT experience as either a Data Analyst/Business Data Analyst or as a Data Engineer.
2+ years of experience with big data technologies like PySpark, Hadoop, Redshift, etc.
3+ years of experience writing SQL queries on RDBMS or cloud-based databases.
Experience with visualization tools such as Spotfire, Power BI, QuickSight, etc.
Experience in data analysis and requirements gathering.
Strong problem-solving skills.
Creative, driven, detail-oriented focus, requiring tackling of tough problems with data and insights.
Natural curiosity and desire to solve problems.
Must be legally authorized to work in India for any employer without sponsorship.
Must be fluent in English and Hindi (written and spoken).
Successful completion of interview required to meet job qualification.
Reliable, punctual attendance is an essential function of the position.

Preferred:
AWS certification preferred.
Strong experience with continuous integration and delivery using Agile methodologies.
Data engineering experience in the transportation/airline industry.
GGN00002145
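For illustration, the data-profiling step this role describes can be as simple as the following pandas sketch, which reports per-column null rates and distinct counts; the file path is a placeholder.

# Sketch: a quick data-profiling pass -- dtype, null rate, and distinct
# count per column. The path is a placeholder (S3 paths need s3fs installed).
import pandas as pd

df = pd.read_csv("daily_extract.csv")

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile.sort_values("null_pct", ascending=False))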

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

As a key member of our data platform team, you'll be tasked with the development of our next-generation data platform. Your responsibilities will include building robust data pipelines for data acquisition and processing, implementing optimized data models, and creating APIs and data products to support our machine learning models, insights engine, and customer-facing applications. Additionally, you'll harness the power of GenAI throughout the data platform lifecycle, while maintaining a strong focus on data governance to uphold timely data availability with high accuracy.

Requirements
Bachelor's degree or higher in Computer Science, Engineering, or a related field, with 5+ years of experience in data engineering and a strong focus on designing and building scalable data platforms and products.
Proven expertise in data modeling, ETL/ELT processes, and data warehousing with distributed computing - Hadoop, Spark, and Kafka.
Proficient in programming languages such as Python/PySpark.
Experience with cloud services such as AWS, Azure, or GCP and related services (S3, Redshift, BigQuery, Dataflow).
Strong understanding of SQL/NoSQL databases (e.g., Postgres, MySQL, Cassandra).
Proven expertise in data quality checks to ensure data accuracy, completeness, consistency, and timeliness.
Excellent problem-solving skills in a fast-paced, collaborative environment, coupled with strong communication skills for effective interaction with technical and non-technical stakeholders.

This job was posted by Sanoop Kannoli from Enlyft.
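As a sketch of the data-quality checks the requirements call out, here is a small pandas-based example covering completeness, freshness, and uniqueness; the column names and thresholds are illustrative assumptions.

# Sketch: simple batch data-quality checks (completeness, freshness,
# uniqueness). Column names and thresholds are illustrative.
from datetime import datetime, timedelta, timezone

import pandas as pd

def check_batch(df: pd.DataFrame) -> list[str]:
    failures = []
    # Completeness: required fields must be populated.
    for col in ("account_id", "event_ts"):
        if df[col].isna().any():
            failures.append(f"nulls in required column {col}")
    # Freshness: newest record should be less than 24h old.
    newest = pd.to_datetime(df["event_ts"], utc=True).max()
    if newest < datetime.now(timezone.utc) - timedelta(hours=24):
        failures.append(f"stale data: newest record {newest}")
    # Uniqueness: no duplicate primary keys.
    if df["event_id"].duplicated().any():
        failures.append("duplicate event_id values")
    return failures

In a production pipeline, checks like these would typically gate downstream loads and emit metrics to an alerting system rather than return a plain list.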

Posted 2 weeks ago

Apply

3.0 years

6 - 8 Lacs

Thiruvananthapuram Taluk, India

On-site

Position: Data Engineer
Experience: 3+ years
Location: Trivandrum, Hybrid
Salary: Up to 8 LPA

Job Summary
We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities
Design and Development:
○ Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes.
○ Implement data models and schemas that support analytical and reporting requirements.
○ Build and maintain robust data APIs for data consumption by various applications and services.
Data Infrastructure:
○ Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
○ Ensure data security, privacy, and compliance with relevant regulations.
○ Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.
Collaboration & Optimization:
○ Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
○ Optimize existing data processes for efficiency, cost-effectiveness, and performance.
○ Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.
Troubleshooting & Support:
○ Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
○ Provide support and expertise to teams consuming data from the data platform.

Required Qualifications
Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
3+ years of hands-on experience as a Data Engineer or in a similar role.
Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
Proven experience with ETL/ELT tools and concepts.
Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
Experience with version control systems (e.g., Git).
Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
Master's degree in a relevant field.
Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
Experience with data streaming technologies (e.g., Kafka, Kinesis).
Knowledge of containerization technologies (e.g., Docker, Kubernetes).
Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
Understanding of DevOps principles as applied to data pipelines.
Prior experience in Telecom is a plus.
Skills: data streaming technologies (Kafka, Kinesis), Azure, data modeling, Apache Spark, workflow orchestration tools (Apache Airflow, Azure Data Factory, AWS Step Functions), pipelines, data engineering, Kubernetes, cloud, programming languages (Python, Java, Scala), Docker, data APIs, data warehousing, AWS, version control systems (Git), Python, cloud services (AWS, Azure, GCP), SQL, NoSQL databases (MongoDB, Cassandra), ETL/ELT pipelines
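To ground the ETL/ELT pipeline work described above, here is a minimal sketch of an idempotent warehouse load using Postgres ON CONFLICT via psycopg2; the table, columns, and connection string are placeholders.

# Sketch: an idempotent load step (upsert) for an ELT pipeline, using
# Postgres ON CONFLICT. Table, columns, and DSN are placeholders.
import psycopg2

UPSERT = """
    INSERT INTO dim_customer (customer_id, name, updated_at)
    VALUES (%s, %s, %s)
    ON CONFLICT (customer_id)
    DO UPDATE SET name = EXCLUDED.name, updated_at = EXCLUDED.updated_at;
"""

rows = [(1, "Asha", "2024-06-01"), (2, "Ravi", "2024-06-01")]

with psycopg2.connect("dbname=warehouse user=etl") as conn:
    with conn.cursor() as cur:
        cur.executemany(UPSERT, rows)

Because reruns overwrite rather than duplicate rows, a failed batch can simply be replayed, which is the property that makes the pipeline safe to retry.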

Posted 2 weeks ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform); see the serving sketch after this posting.
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required: Technical Architect with 7-12 years of experience
Salary: 13-17 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Experience: total work: 1 year (Preferred)
Work Location: In person
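As one concrete instance of the model-deployment options listed in item 11, here is a minimal FastAPI serving sketch; the endpoint, request schema, and scoring logic are illustrative stand-ins for a real model.

# Sketch: a minimal FastAPI model-serving endpoint. The scoring logic is
# a stand-in for a real model loaded at startup (e.g., via joblib).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    # Stand-in prediction: mean of the inputs.
    score = sum(features.values) / max(len(features.values), 1)
    return {"score": score}

# Run with: uvicorn serve:app --host 0.0.0.0 --port 8000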

Posted 2 weeks ago

Apply