
2009 Redshift Jobs - Page 7

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Kerala, India

On-site


Senior Data Engineer – AWS Expert (Lead/Associate Architect Level)
📍 Location: Trivandrum or Kochi (On-site/Hybrid)
Experience: 10+ years (relevant AWS experience of 5+ years is mandatory)

About the Role
We’re hiring a Senior Data Engineer with deep expertise in AWS services, strong hands-on experience in data ingestion, data quality, and API development, and the leadership skills to operate at a Lead or Associate Architect level. This role demands a high level of technical ownership, especially in architecting scalable, reliable data pipelines and robust API integrations. You’ll collaborate with cross-functional teams across geographies, so a willingness to work night shifts overlapping with US hours (till 10 AM IST) is essential.

Key Responsibilities
- Data Engineering Leadership: Design and implement scalable, end-to-end data ingestion and processing frameworks using AWS.
- AWS Architecture: Hands-on development using AWS Glue, Lambda, EMR, Step Functions, S3, ECS, and other AWS services.
- Data Quality & Validation: Build automated checks, validation layers, and monitoring to ensure data accuracy and integrity.
- API Development: Develop secure, high-performance REST APIs for internal and external data integration.
- Collaboration: Work closely with product, analytics, and DevOps teams across geographies. Participate in Agile ceremonies and CI/CD pipelines using tools like GitLab.

What We’re Looking For
- Experience: 5+ years in data engineering, with a proven track record in designing scalable AWS-based data systems.
- Technical Mastery: Proficient in Python/PySpark, SQL, and building big data pipelines.
- AWS Expert: Deep knowledge of core AWS services used for data ingestion and processing.
- API Expertise: Experience designing and managing scalable APIs.
- Leadership Qualities: Ability to work independently, lead discussions, and drive technical decisions.

Preferred Qualifications
- Experience with Kinesis, Firehose, SQS, and data lakehouse architectures.
- Exposure to tools like Apache Iceberg, Aurora, Redshift, and DynamoDB.
- Prior experience in distributed, multi-cluster environments.

Working Hours
US time zone overlap required: must be available to work night shifts overlapping with US hours (up to 10:00 AM IST).

Work Location
Trivandrum or Kochi – on-site or hybrid options available for the right candidate.
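
For context, a minimal sketch of the kind of AWS Glue ingestion job this role describes is shown below. The bucket names, paths, and column names are hypothetical placeholders, and only the standard awsglue job boilerplate is assumed.

```python
# Minimal AWS Glue (PySpark) ingestion sketch -- illustrative only.
# Bucket names, paths, and the "order_id"/"order_date" columns are hypothetical.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON landed in S3 (hypothetical source bucket).
raw = spark.read.json("s3://example-raw-zone/orders/")

# Basic validation layer: drop records missing a primary key, then deduplicate.
clean = raw.filter(raw["order_id"].isNotNull()).dropDuplicates(["order_id"])

# Write partitioned Parquet to the curated zone (hypothetical target bucket).
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-zone/orders/"))

job.commit()
```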

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Summary
We are looking for an accomplished and dynamic Data Engineering Lead to join our team and drive the design, development, and delivery of cutting-edge data solutions. This role requires a balance of strong technical expertise, strategic leadership, and a consulting mindset. As the Lead Data Engineer, you will oversee the design and development of robust data pipelines and systems, manage and mentor a team of 5 to 7 engineers, and play a critical role in architecting innovative solutions tailored to client needs. You will lead by example, fostering a culture of accountability, ownership, and continuous improvement while delivering impactful, scalable data solutions in a fast-paced consulting environment.

Job Responsibilities
Client Collaboration:
· Act as the primary point of contact for US-based clients, ensuring alignment on project goals, timelines, and deliverables.
· Engage with stakeholders to understand requirements and ensure alignment throughout the project lifecycle.
· Present technical concepts and designs to both technical and non-technical audiences.
· Communicate effectively with stakeholders to ensure alignment on project goals, timelines, and deliverables.
· Set realistic expectations with clients and proactively address concerns or risks.
Data Solution Design and Development:
· Architect, design, and implement end-to-end data pipelines and systems that handle large-scale, complex datasets.
· Ensure optimal system architecture for performance, scalability, and reliability.
· Evaluate and integrate new technologies to enhance existing solutions.
· Implement best practices in ETL/ELT processes, data integration, and data warehousing.
Project Leadership and Delivery:
· Lead technical project execution, ensuring timelines and deliverables are met with high quality.
· Collaborate with cross-functional teams to align business goals with technical solutions.
· Act as the primary point of contact for clients, translating business requirements into actionable technical strategies.
Team Leadership and Development:
· Manage, mentor, and grow a team of 5 to 7 data engineers; ensure timely follow-ups on action items and maintain seamless communication across time zones.
· Conduct code reviews and validations, and provide feedback to ensure adherence to technical standards.
· Provide technical guidance and foster an environment of continuous learning, innovation, and collaboration.
· Support collaboration and alignment between the client and delivery teams.
Optimization and Performance Tuning:
· Be hands-on in developing, testing, and documenting data pipelines and solutions as needed.
· Analyze and optimize existing data workflows for performance and cost-efficiency.
· Troubleshoot and resolve complex technical issues within data systems.
Adaptability and Innovation:
· Embrace a consulting mindset with the ability to quickly learn and adopt new tools, technologies, and frameworks.
· Identify opportunities for innovation and implement cutting-edge technologies in data engineering.
· Exhibit a "figure it out" attitude, taking ownership and accountability for challenges and solutions.
Learning and Adaptability:
· Stay updated on emerging data technologies, frameworks, and tools.
· Actively explore and integrate new technologies to improve existing workflows and solutions.
Internal Initiatives and Eminence Building:
· Drive internal initiatives to improve processes, frameworks, and methodologies.
· Contribute to the organization’s eminence by developing thought leadership, sharing best practices, and participating in knowledge-sharing activities.

Essential Skills and Qualifications
Education:
· Bachelor’s or master’s degree in Computer Science, Data Engineering, or a related field.
· Certifications in cloud platforms, such as Snowflake SnowPro or a data engineering certification, are a plus.
Experience:
· 8+ years of experience in data engineering, with hands-on expertise in data pipeline development, architecture, and system optimization.
· Demonstrated success in managing global teams, especially across US and India time zones.
· Proven track record in leading data engineering teams and managing end-to-end project delivery.
· Strong background in data warehousing and familiarity with tools such as Matillion, dbt, Striim, etc.
Technical Skills:
· Lead the design, development, and deployment of scalable data architectures, pipelines, and processes tailored to client needs.
· Expertise in programming languages such as Python, Scala, or Java.
· Proficiency in designing and delivering data pipelines in cloud data warehouses (e.g., Snowflake, Redshift), using ETL/ELT tools such as Matillion, dbt, Striim, etc.
· Solid understanding of database systems (relational and NoSQL) and data modeling techniques.
· 2+ years of hands-on experience designing and developing data integration solutions using Matillion and/or dbt.
· Strong knowledge of data engineering and integration frameworks.
· Expertise in architecting data solutions.
· Successfully implemented at least two end-to-end projects with multiple transformation layers.
· Good grasp of coding standards, with the ability to define standards and testing strategies for projects.
· Proficiency in working with cloud platforms (AWS, Azure, GCP) and associated data services.
· Enthusiastic about working in Agile methodology.
· Comprehensive understanding of the DevOps process, including GitHub integration and CI/CD pipelines.
Soft Skills:
· Exceptional problem-solving and analytical skills.
· Strong communication and interpersonal skills to manage client relationships and team dynamics.
· Ability to thrive in a consulting environment, quickly adapting to new challenges and domains.
· Ability to handle ambiguity and proactively take ownership of challenges.
· Demonstrated accountability, ownership, and a proactive approach to solving problems.

Background Check Required
No criminal record.

Others
· Bachelor’s or master’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
· There are 2-3 rounds in the interview process.
· This is a 5-days-work-from-office role (no hybrid/remote options available).
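
As a rough illustration of the warehouse-centred ELT work this role describes, the sketch below runs a simple transformation and a row-count validation through snowflake-connector-python. All connection parameters, schemas, and table names are hypothetical placeholders, not details from the posting.

```python
# Illustrative ELT step against Snowflake -- a sketch only, not a prescribed implementation.
# Account, credentials, and table names below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Simple transformation layer: deduplicate staged orders into a curated table.
    cur.execute("""
        CREATE OR REPLACE TABLE CURATED.ORDERS AS
        SELECT *
        FROM STAGING.ORDERS_RAW
        QUALIFY ROW_NUMBER() OVER (PARTITION BY ORDER_ID ORDER BY LOADED_AT DESC) = 1
    """)
    # Lightweight data-quality check: fail loudly if the curated table is empty.
    cur.execute("SELECT COUNT(*) FROM CURATED.ORDERS")
    row_count = cur.fetchone()[0]
    if row_count == 0:
        raise ValueError("Validation failed: CURATED.ORDERS is empty")
    print(f"Loaded {row_count} curated rows")
finally:
    conn.close()
```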

Posted 2 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Overview

About Business Unit
SaaSOps manages post-production support and the overall experience of Epsilon PeopleCloud products for our global clients. This function is responsible for product support, incident management, managed operations and the automation of processes. The team has successfully incubated and mainstreamed Site Reliability Engineering (SRE) as a practice, to ensure reliable product operations on a global scale. Plus, the team is actively pioneering the adoption of AI in operations (AIOps) and recently launched AI-driven self-service capabilities to enhance operational efficiency and improve client experiences. Click here to view how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice.

Responsibilities
What you will do (roles and responsibilities):
- Collaborate with other software developers, business analysts and software architects to plan, design, develop, test, and maintain web-based business applications built on Microsoft and other similar frameworks and technologies.
- Excellent skills and hands-on experience in developing frontend applications along with middleware and backend.
- Maintain high standards of software quality within the team by establishing best practices and processes.
- Think creatively to push beyond the boundaries of existing practices and mindsets.
- Use knowledge to create new and improve existing processes in terms of design and performance.
- Package and support deployment of releases.
- Plan, participate in and execute team-building and fun activities.

Qualifications
Essential skills & experience:
- Bachelor’s degree in Computer Science or a related field, or equivalent experience.
- 3-4 years of experience in software engineering.
- Demonstrated experience driving delivery through strong delivery practices, across complex programs of work.
- Strong communication skills.
- Detail-oriented and able to manage multiple tasks simultaneously.
- Willingness to learn new skills and apply them to developing new-age applications.
- Experience with web development technologies and frameworks including .NET Framework, REST APIs, MVC.
- Working knowledge of database technologies such as SQL, Oracle, DynamoDB; basic Oracle SQL and PL/SQL is a must.
- Proficiency in HTML, CSS, JavaScript, and jQuery.
- Unit testing (NUnit).
- Cloud (AWS/Azure).
- Knowledge of version control tools like GitHub, VSTS etc. is a must.
- Agile development, DevOps (CI/CD).
- C# and .NET Framework, Web APIs.
- Debugging and troubleshooting skills.
- Ability to drive things independently with minimal supervision.
- Developing web APIs in JSON and troubleshooting them during production issues.

Desirable skills & experience:
- API testing through Postman or ReadyAPI.
- Responsive web (Bootstrap).
- Experience with the Unix/Linux command line and bash shell is good to have.
- Experience in AWS, Redshift or equivalent databases, Lambda functions, Snowflake DB types.
- Proficient in Unix shell scripting and Python.
- Knowledge of AWS EC2, S3, AMI etc.
- Security scans and vulnerability fixes for web applications.
- Kibana tool validations and analysis.

Personal Attributes
- Professionalism and integrity
- Self-starter
- Excellent command of verbal and written English
- Well organized, with the ability to coordinate development across multiple team members
- Commitment to continuous learning and team/individual growth
- Ability to quickly adapt to a changing tech landscape
- Analysis and problem-solving skills

Additional Information
Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we’ve provided marketers from the world’s leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels.

Epsilon’s comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.

Epsilon has a core set of 5 values that define our culture and guide us to create value for our clients, our people and consumers. We are seeking candidates that align with our company values, demonstrate them and make them meaningful in their day-to-day work:
- Act with integrity. We are transparent and have the courage to do the right thing.
- Work together to win together. We believe collaboration is the catalyst that unlocks our full potential.
- Innovate with purpose. We shape the market with big ideas that drive big outcomes.
- Respect all voices. We embrace differences and foster a culture of connection and belonging.
- Empower with accountability. We trust each other to own and deliver on common goals.

Because You Matter
YOUniverse. A work-world with you at the heart of it! At Epsilon, we believe people make the place. And everything we do is designed with you in mind. That’s why our work-world, aptly named ‘YOUniverse’, is focused on creating a nurturing environment that elevates your growth, wellbeing and work-life harmony. So, come be part of a people-centric workspace where care for you is at the core of all we do. Take a trip to YOUniverse and explore our unique benefits, here.

Epsilon is an Equal Opportunity Employer. Epsilon is committed to promoting diversity, inclusion, and equal employment opportunities by using reasonable efforts to attract, recruit, engage and retain qualified individuals of all ethnicities and backgrounds, including, but not limited to, women, people of color, LGBTQ individuals, people with disabilities and any other underrepresented groups, traits or characteristics.

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

On-site


About Oportun
Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun
Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Engineering Business Unit Overview
The charter for the Engineering group at Oportun is to be the world-class engineering force behind our innovative products. The group plays a vital role in designing, developing, and maintaining cutting-edge software solutions that power our mission and advance our business. We strike a balance between leveraging leading tools and developing in-house solutions to create member experiences that empower their financial independence. The talented engineers in this group are dedicated to delivering and maintaining performant, elegant, and intuitive systems to our business partners and retail members. Our platform combines service-oriented platform features with sophisticated user experience and is enabled through a best-in-class (and fun to use!) automated development infrastructure. We prove that FinTech is more fun, more challenging, and in our case, more rewarding as we build technology that changes our members’ lives.

Engineering at Oportun is responsible for high-quality and scalable technical execution to achieve business goals and product vision. They ensure business continuity to members by effectively managing systems and services - overseeing technical architectures and system health. In addition, they are responsible for identifying and executing on the technical roadmap that enables product vision as well as fosters member & business growth in a scalable and efficient manner. The Enterprise Data and Technology (EDT) pillar within the Engineering Business Unit focuses on enabling wide use of corporate data assets whilst ensuring quality, availability and security across the data landscape.

Position Overview
As a Senior Data Engineer at Oportun, you will be a key member of our EDT team, responsible for designing, developing, and maintaining sophisticated software / data platforms in achieving the charter of the engineering group. Your mastery of a technical domain enables you to take up business problems and solve them with a technical solution. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. This is a role where you will have the opportunity to take up responsibility in leading the technology effort – from technical requirements gathering to final successful delivery of the product – for large initiatives (cross-functional and multi-month-long projects).

Responsibilities
- Data Architecture and Design: Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements. Collaborate with stakeholders to understand data requirements, build subject matter expertise and define optimal data models and structures.
- Data Pipeline Development and Optimization: Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data. Optimize data pipelines for performance, reliability, and scalability.
- Database Management and Optimization: Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security. Implement and manage ETL processes for efficient data loading and retrieval.
- Data Quality and Governance: Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations. Drive initiatives to improve data quality and documentation of data assets.
- Mentorship and Leadership: Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth. Lead and participate in code reviews, ensuring best practices and high-quality code.
- Collaboration and Stakeholder Management: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet those needs. Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value.
- Performance Monitoring and Optimization: Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability.

Common Software Engineering Requirements
- You actively contribute to the end-to-end delivery of complex software applications, ensuring adherence to best practices and high overall quality standards.
- You have a strong understanding of a business or system domain, with sufficient knowledge and expertise around the appropriate metrics and trends.
- You collaborate closely with product managers, designers, and fellow engineers to understand business needs and translate them into effective software solutions.
- You provide technical leadership and expertise, guiding the team in making sound architectural decisions and solving challenging technical problems. Your solutions anticipate scale, reliability, monitoring, integration, and extensibility.
- You conduct code reviews and provide constructive feedback to ensure code quality, performance, and maintainability.
- You mentor and coach junior engineers, fostering a culture of continuous learning, growth, and technical excellence within the team.
- You play a significant role in the ongoing evolution and refinement of the current tools and applications used by the team, and drive adoption of new practices within your team.
- You take ownership of (customer) issues, including initial troubleshooting, identification of root cause, and issue escalation or resolution, while maintaining the overall reliability and performance of our systems.
- You set the benchmark for responsiveness, ownership, and overall accountability of engineering systems.
- You independently drive and lead multiple features, contribute to (a) large project(s), and lead smaller projects. You can orchestrate work that spans multiple engineers within your team and keep all relevant stakeholders informed.
- You keep your lead/EM informed about your work and that of the team so they can share it with stakeholders, including escalation of issues.

Requirements
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management.
- Proficiency in programming languages like Python/PySpark and Java/Scala.
- Expertise in big data technologies such as Hadoop, Spark, Kafka, etc.
- In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MySQL, NoSQL databases).
- Experience and expertise in building complex end-to-end data pipelines.
- Experience with orchestration and designing job schedules using CI/CD and scheduling tools like Jenkins and Airflow.
- Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.).
- Ability to mentor junior team members.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse).
- Strong leadership, problem-solving, and decision-making skills.
- Excellent communication and collaboration abilities.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/.

We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).
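
As a loose illustration of the pipeline-orchestration work described above, here is a minimal Airflow DAG sketch that schedules a daily extract-transform-load step. The DAG id, schedule, and task logic are hypothetical assumptions, not details from the posting.

```python
# Minimal Airflow DAG sketch for a daily ETL job -- illustrative only.
# The DAG id, schedule, and source/target systems are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transform_load(ds: str, **_) -> None:
    """Toy ETL step: a real task would pull from the source system, transform
    the data (for example in Spark), and load the result into the warehouse."""
    print(f"Running ETL for logical date {ds}")


with DAG(
    dag_id="daily_member_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    etl = PythonOperator(
        task_id="extract_transform_load",
        python_callable=extract_transform_load,
    )
```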

Posted 2 days ago

Apply

0 years

0 Lacs

Pune

On-site

Job Description
Intern – Data Solutions

As an Intern – Data Solutions, you will be part of the Commercial Data Solutions team, providing technical/data expertise in the development of analytical data products to enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space – to develop best-in-class data pipelines and products, working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery.

Your specific responsibilities will include:
- Hands-on development of last-mile data products using the most up-to-date technologies and software / data / DevOps engineering practices
- Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way
- Develop deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for
- Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts and visualization developers on how to use these data models
- Develop analytical data products for reusability, governance and compliance by design
- Align with organization strategy and implement a semantic layer for analytics data products
- Support data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks

Education:
- B.Tech / B.S., M.Tech / M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field

Required experience:
- High proficiency in SQL, Python and AWS
- Good understanding and comprehension of the requirements provided by the Data Product Owner and Lead Analytics Engineer
- Understanding of creating / adopting data models to meet requirements from Marketing, Data Science and Visualization stakeholders
- Experience with feature engineering
- Hands-on with cloud-based (AWS / GCP / Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.)
- Hands-on with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku)
- Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Intern/Co-op (Fixed Term)
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 06/16/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R344334

Posted 2 days ago

Apply

10.0 years

0 Lacs

Noida

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description
Role: Service Desk Manager

Java / Microservices / Mainframe Skills
Backend:
- Server-side frameworks – Java 17, Spring Boot, Spring MVC
- Microservice architecture, REST API design
- Unit testing frameworks – JUnit, Mockito
- Performance tuning & profiling – VisualVM, JMeter
- Logging and troubleshooting – New Relic, Dynatrace, Kibana, CloudWatch
Database:
- Relational databases – Oracle, MSSQL, MySQL, Postgres, DB2
- NoSQL databases – MongoDB, Redis, DynamoDB
Cloud Platform, Deployment and DevOps:
- Cloud platforms – AWS (ECS, EC2, CloudFront, CloudWatch, S3, IAM, Route 53, ALB), Redshift, RDS
- DevOps – Docker containers, SonarQube, Jenkins, Git, GitHub, GitHub Actions, CI/CD pipelines, Terraform
Mainframe:
- COBOL, JCL, DB2 (advanced SQL skills), Alight COOL
- Tools: Xpeditor, File-Aid (Smart-File), DumbMaster (AbendAid), Omegamon, iStrobe/Strobe

Roles & Responsibilities
- Ensure the architectural needs of the value stream are realized rather than those of just a channel or a team
- Strong knowledge of design principles, and implementing them in designing and building robust and scalable software solutions across the full stack with no or minimal adoption effort by clients
- Work with Product Management, Product Owners, and other value stream stakeholders to help ensure strategy and execution alignment
- Help manage risks and dependencies; troubleshoot and proactively solve problems
- Collaborate and review with other architects, and participate in the Architecture CoP and ARB
- Own performance and non-functional requirements, including end-to-end ownership
- Acquire broader insight into the product and enterprise architecture
- Spread broader knowledge with other engineers in the value stream
- Proven experience in the stated engineering skills
- Aggregate program PI objectives into value stream PI objectives and publish them for visibility and transparency
- Assess the agility level of the program/value stream and help improve it

Competencies
- Client centricity
- Collaborative working
- Learning agility
- Problem solving & decision making
- Effective communication

Mandatory Skills: Technology (Alight IT). Experience: >10 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote


Job Title: Data Analyst Trainee
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics

Job Summary:
We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization.

Key Responsibilities:
- Collect, clean, and analyze large datasets from various sources
- Perform exploratory data analysis (EDA) and generate actionable insights
- Build interactive dashboards and reports using Excel, Power BI, or Tableau
- Write and optimize SQL queries for data extraction and manipulation
- Collaborate with cross-functional teams to understand data needs
- Document analytical methodologies, insights, and recommendations

Qualifications:
- Bachelor’s degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field
- Proficiency in Excel and SQL
- Working knowledge of Python (Pandas, NumPy, Matplotlib) or R
- Understanding of basic statistics and analytical methods
- Strong attention to detail and problem-solving ability
- Ability to work independently and communicate effectively in a remote setting

Preferred Skills (Nice to Have):
- Experience with BI tools like Power BI, Tableau, or Google Data Studio
- Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift)
- Knowledge of data storytelling and KPI measurement
- Previous academic or personal projects in analytics

What We Offer:
- Monthly stipend of ₹25,000
- Fully remote internship
- Mentorship from experienced data analysts and domain experts
- Hands-on experience with real business data and live projects
- Certificate of Completion
- Opportunity for a full-time role based on performance
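
To give a flavour of the exploratory data analysis work listed above, here is a small pandas sketch. The CSV file name and the columns ("region", "revenue", "order_date") are hypothetical, not data from this employer.

```python
# Small exploratory data analysis (EDA) sketch with pandas -- illustrative only.
# The file name and the columns ("region", "revenue", "order_date") are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Clean: drop exact duplicates and rows missing the key metric.
df = df.drop_duplicates().dropna(subset=["revenue"])

# Quick profiling: shape, dtypes, and summary statistics.
print(df.shape)
print(df.dtypes)
print(df["revenue"].describe())

# Aggregate: monthly revenue per region.
df["month"] = df["order_date"].dt.to_period("M").astype(str)
monthly = df.groupby(["region", "month"], as_index=False)["revenue"].sum()
print(monthly.head())

# Visualize the trend for a single region.
region = monthly["region"].iloc[0]
subset = monthly[monthly["region"] == region]
subset.plot(x="month", y="revenue", title=f"Monthly revenue - {region}")
plt.tight_layout()
plt.show()
```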

Posted 2 days ago

Apply

1.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description:
Location: Indore, Noida, Pune and Bengaluru
Qualifications: BE/B.Tech/MCA/M.Tech/M.Com in Computer Science or a related field

Required Skills:
- EDW Expertise: Hands-on experience with Teradata or Oracle.
- PL/SQL Proficiency: Strong ability to write complex queries.
- Performance Tuning: Expertise in optimizing queries to meet SLA requirements.
- Communication: Strong verbal and written communication skills.

Experience Required: 1-3 years

Preferred Skills:
- Cloud Technologies: Working knowledge of AWS S3 and Redshift or equivalent.
- Database Migration: Familiarity with database migration processes.
- Big Data Tools: Understanding of Spark SQL and PySpark.
- Programming: Experience with Python for data processing and analytics.
- Data Management: Experience with import/export operations.

Roles & Responsibilities:
- Module Ownership: Manage a module and assist the team.
- Optimized PL/SQL Development: Write efficient queries.
- Performance Tuning: Improve database speed and efficiency.
- Requirement Analysis: Work with business users to refine needs.
- Application Development: Build solutions using complex SQL queries.
- Data Validation: Ensure integrity of large datasets (TB/PB).
- Testing & Debugging: Conduct unit testing and fix issues.
- Database Strategies: Apply best practices for development.

Interested candidates can share their resumes at anubhav.pathania@impetus.com

Posted 2 days ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


We deliver the world’s most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied and challenging role.

Building on our past. Ready for the future.
Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.

The Role
- Develop and implement data pipelines for ingesting and collecting data from various sources into a centralized data platform.
- Develop and maintain ETL jobs using AWS Glue services to process and transform data at scale.
- Optimize and troubleshoot AWS Glue jobs for performance and reliability.
- Utilize Python and PySpark to efficiently handle large volumes of data during the ingestion process.
- Collaborate with data architects to design and implement data models that support business requirements.
- Create and maintain ETL processes using Airflow, Python and PySpark to move and transform data between different systems.
- Implement monitoring solutions to track data pipeline performance and proactively identify and address issues.
- Manage and optimize databases, both SQL and NoSQL, to support data storage and retrieval needs.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform, AWS CDK and others.
- Proficiency in event-driven integrations, batch-based and API-led data integrations.
- Proficiency in CI/CD pipelines such as Azure DevOps, AWS pipelines or GitHub Actions.

About You
To be considered for this role it is envisaged you will possess the following attributes:

Technical and Industry Experience:
- Independent integration developer with 5+ years of experience in developing and delivering integration projects in an agile or waterfall-based project environment.
- Proficiency in the Python, PySpark and SQL programming languages for data manipulation and pipeline development.
- Hands-on experience with AWS Glue, Airflow, DynamoDB, Redshift, S3 buckets, Event Grid, and other AWS services.
- Experience implementing CI/CD pipelines, including data testing practices.
- Proficient in Swagger, JSON, XML, SOAP and REST based web service development.

Behaviors Required:
- Driven by our values and purpose in everything we do.
- Visible, active, hands-on approach to help teams be successful.
- Strong proactive planning ability.
- Optimistic, energetic problem solver, with the ability to see long-term business outcomes.
- Collaborative; able to listen and compromise to make progress.
- "Stronger together" mindset, with a focus on innovation and creation of tangible / realized value.
- Challenge the status quo.

Education – Qualifications, Accreditation, Training:
- Degree in Computer Science and/or related fields
- AWS Data Engineering certifications desirable

Moving forward together
We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law.

We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.

Company: Worley
Primary Location: IND-MM-Mumbai
Job: Digital Solutions
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: Jun 4, 2025
Unposting Date: Jul 4, 2025
Reporting Manager Title: Director
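
As an example of the event-driven ingestion pattern this role mentions, here is a hedged sketch of an AWS Lambda handler reacting to S3 object-created events. The DynamoDB table name, the S3 key layout, and the manifest design are illustrative assumptions.

```python
# Illustrative AWS Lambda handler for event-driven ingestion -- a sketch only.
# The DynamoDB table name and the manifest design are hypothetical assumptions.
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
manifest_table = dynamodb.Table("ingestion-manifest")  # hypothetical table


def lambda_handler(event, context):
    """Record every newly landed S3 object in a manifest table for downstream ETL."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Fetch object metadata and register it for the next pipeline stage.
        head = s3.head_object(Bucket=bucket, Key=key)
        manifest_table.put_item(
            Item={
                "object_key": key,
                "bucket": bucket,
                "size_bytes": head["ContentLength"],
                "etag": head["ETag"],
            }
        )

    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```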

Posted 2 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Overview

Working at Atlassian
Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities
- Partner with Data Science, Product Management, Analytics, and Business teams to review and gather the data/reporting/analytics requirements and build trusted and scalable data models, data extraction processes, and data applications to help answer complex questions.
- Design and implement data pipelines to ETL data from multiple sources into a central data warehouse.
- Design and implement real-time data processing pipelines using Apache Spark Streaming.
- Improve data quality by leveraging internal tools/frameworks to automatically detect and mitigate data quality issues.
- Develop and implement data governance procedures to ensure data security, privacy, and compliance.
- Implement new technologies to improve data processing and analysis.
- Coach and mentor junior data engineers to enhance their skills and foster a collaborative team environment.

Qualifications
- A BE in Computer Science or equivalent with 8+ years of professional experience as a Data Engineer or in a similar role.
- Experience building scalable data pipelines in Spark using the Airflow scheduler/executor framework or similar scheduling tools.
- Experience with Databricks and its APIs.
- Experience with modern databases (Redshift, DynamoDB, MongoDB, Postgres or similar) and data lakes.
- Proficiency in one or more programming languages such as Python/Scala and rock-solid SQL skills.
- Champion automated builds and deployments using CI/CD tools like Bitbucket and Git.
- Experience working with large-scale, high-performance data processing systems (batch and streaming).

Our perks & benefits
Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more.

About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh .
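
Since the responsibilities mention real-time pipelines with Spark streaming, below is a hedged Structured Streaming sketch that reads from Kafka and writes to a Parquet sink. The broker address, topic name, schema, and output paths are hypothetical placeholders.

```python
# Minimal Spark Structured Streaming sketch (Kafka -> Parquet sink) -- illustrative only.
# Broker address, topic name, schema, and output paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("clickstream-stream").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw event stream from Kafka.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "clickstream-events")
       .load())

# Parse the JSON payload and apply a basic data-quality filter.
events = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
             .select("e.*")
             .filter(col("user_id").isNotNull()))

# Append the cleaned events to the data lake with checkpointing for fault tolerance.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/clickstream/")
         .option("checkpointLocation", "s3a://example-lake/_checkpoints/clickstream/")
         .outputMode("append")
         .start())

query.awaitTermination()
```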

Posted 2 days ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Location: Mumbai (Bandra)
Experience: 4 to 6 years
Work Model: Full-time | Roster-based (24/7 support model)
Travel: Occasional (for cloud initiatives, conferences, or training)

The candidate should have hands-on experience in developing and managing AWS infrastructure and DevOps setup, along with the ability to delegate and distribute tasks effectively within the team.

Key Responsibilities
- Architect, deploy, automate, and manage AWS-based production systems.
- Ensure high availability, scalability, security, and reliability of cloud environments.
- Troubleshoot complex issues across multiple cloud applications and platforms.
- Design, implement, and maintain tools to automate operational processes.
- Provide operational support and engineering assistance for cloud-related issues and deployments.
- Lead platform security initiatives in collaboration with development and security teams.
- Define, maintain, and improve policies and standards for Infrastructure as Code (IaC) and CI/CD practices.
- Work closely with Product Owners and development teams to identify and deliver continuous improvements.
- Mentor junior team members, manage workloads, and ensure timely task execution.

Required Technical Skills
Cloud & AWS Services – strong hands-on experience with AWS core services including:
- Compute & Networking: EC2, VPC, VPN, EKS, Lambda
- Storage & CDN: S3, CloudFront
- Monitoring & Logging: CloudWatch
- Security: IAM, Secrets Manager, SNS
- Data & Analytics: Kinesis, Redshift, EMR
- DevOps Services: CodeCommit, CodeBuild, CodePipeline, ECR
- Other Services: Route 53, AWS Organizations

DevOps & Tooling – experience with tools like:
- CI/CD & IaC: Terraform, FluxCD
- Security & Monitoring: Prisma Cloud, SonarQube, Site24x7
- API & Gateway Management: Kong

Preferred Qualifications:
- AWS Certified Solutions Architect – Associate or Professional
- Experience in managing or mentoring cross-functional teams
- Exposure to multi-cloud or hybrid cloud architectures is a plus
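
To illustrate the "tools to automate operational processes" responsibility, here is a hedged boto3 sketch that flags running EC2 instances missing a tag. The required tag key and region are assumptions for the example, not a standard from this employer.

```python
# Small operational-automation sketch with boto3 -- illustrative only.
# The required tag key ("Owner") and the region are assumptions, not a company standard.
import boto3

REQUIRED_TAG = "Owner"


def find_untagged_instances(region: str = "ap-south-1") -> list[str]:
    """Return the IDs of running EC2 instances missing the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []

    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tag_keys = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tag_keys:
                    untagged.append(instance["InstanceId"])

    return untagged


if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")
```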

Posted 2 days ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Company Description
BEYOND SOFTWARES AND CONSULTANCY SERVICES Pvt. Ltd. (BSC Services) is committed to delivering innovative solutions to meet clients' evolving needs, particularly in the telecommunication industry. We provide a variety of software solutions for billing and customer management, network optimization, and data analytics. Our skilled team of software developers and telecom specialists collaborates closely with clients to understand their specific requirements and deliver high-quality, secure software solutions. We strive to build long-term relationships based on trust, transparency, and open communication, ensuring our clients stay competitive and grow in a dynamic market.

Role Description
We are looking for a Data Engineer with expertise in dbt and Airflow for a full-time remote position. The Data Engineer will be responsible for designing, developing, and managing data pipelines and ETL processes. Day-to-day tasks include data modeling, data warehousing, and implementing data analytics solutions. The role involves collaborating with cross-functional teams to ensure data integrity and optimize data workflows.

Must-Have Skills:
- 5 to 10 years of IT experience in data transformation in an Amazon Redshift data warehouse using Apache Airflow, dbt (Data Build Tool) and Cosmos.
- Hands-on experience working in complex data warehouse implementations.
- Expert in advanced SQL.
- Responsible for designing, developing, testing and maintaining data pipelines using AWS Redshift, dbt, and Airflow.
- Strong data analytical skills.
- Minimum 5 years of hands-on experience with the Amazon Redshift data warehouse.
- Experience with dbt (Data Build Tool) for data transformation.
- Experience in developing, scheduling and monitoring workflow orchestration using Apache Airflow.
- Experience with the Astro and Cosmos libraries.
- Experience in constructing DAGs in Airflow.
- DevOps experience: Bitbucket, or GitHub/GitLab.
- Minimum 5 years of experience in data transformation projects.
- Development of data ingestion pipelines and robust ETL frameworks.
- Strong hands-on experience in analysing data on large datasets.
- Extensive experience in dimensional data modelling, including complex entity relationships and historical data entities.
- Implementation of data cleansing and data quality features in ETL pipelines.
- Implementation of data streaming solutions from different sources for data migration and transformation.
- Extensive data engineering experience using Python.
- Experience in SQL and performance tuning.
- Hands-on experience parsing responses generated by APIs (REST/XML/JSON).
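
A rough sketch of the dbt-on-Airflow orchestration pattern this posting describes is shown below, using a plain BashOperator to run dbt against Redshift (the Cosmos library mentioned above offers tighter dbt-Airflow integration, but this sketch sticks to the basic operator). The project path, target name, and schedule are hypothetical.

```python
# Hedged sketch: orchestrating dbt runs against Redshift with Airflow.
# The dbt project directory, target name, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_PROJECT_DIR = "/opt/airflow/dbt/analytics"  # hypothetical path

with DAG(
    dag_id="dbt_redshift_transformations",
    start_date=datetime(2025, 1, 1),
    schedule_interval="0 2 * * *",  # nightly run
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt run --target prod",
    )

    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt test --target prod",
    )

    # Run the models first, then the data-quality tests.
    dbt_run >> dbt_test
```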

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site


Company Description
ThreatXIntel is a startup cyber security company specializing in cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We offer customized, affordable solutions tailored to meet the specific needs of businesses of all sizes. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they can be exploited.

Role Description
We are looking for a skilled freelance Data Engineer with expertise in PySpark and AWS data services, particularly S3 and Redshift. Familiarity with Salesforce data integration is a plus. This role focuses on building scalable data pipelines and supporting analytics use cases in a cloud-native environment.

Key Responsibilities
- Design and develop ETL/ELT data pipelines using PySpark for large-scale data processing
- Ingest, transform, and store data across AWS S3 (data lake) and Amazon Redshift (data warehouse)
- Integrate data from Salesforce into the cloud data ecosystem for analysis
- Optimize data workflows for performance and cost-efficiency
- Write efficient code and queries for structured and unstructured data
- Collaborate with analysts and stakeholders to deliver clean, usable datasets

Required Skills
- Strong hands-on experience with PySpark
- Proficient in AWS services, especially S3 and Redshift
- Basic working knowledge of the Salesforce data structure or API
- Ability to write complex SQL for data transformation and reporting
- Familiarity with version control and Agile collaboration tools
- Good communication and documentation skills
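
As a hedged illustration of the S3-to-Redshift pipeline pattern mentioned above, the sketch below reads Parquet from S3, applies a simple transformation, and writes to Redshift over JDBC. The paths, JDBC URL, credentials handling, and table names are hypothetical, and a production job would more commonly stage data to S3 and use the Redshift COPY command or a dedicated Spark-Redshift connector.

```python
# Hedged PySpark sketch: S3 (Parquet) -> transform -> Redshift via JDBC.
# Bucket, JDBC URL, credentials, and table names are hypothetical placeholders;
# a production pipeline would more likely stage to S3 and use Redshift COPY.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-to-redshift").getOrCreate()

# Read raw events from the S3 data lake.
events = spark.read.parquet("s3a://example-data-lake/raw/events/")

# Simple transformation: keep valid rows and stamp a processing date.
clean = (events
         .filter(F.col("account_id").isNotNull())
         .withColumn("processed_date", F.current_date()))

# Write to a Redshift staging table over JDBC (driver must be on the classpath).
(clean.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/analytics")
      .option("dbtable", "staging.events")
      .option("user", "etl_user")
      .option("password", "REPLACE_ME")
      .option("driver", "com.amazon.redshift.jdbc42.Driver")
      .mode("append")
      .save())
```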

Posted 2 days ago

Apply

1.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Summary
We are hiring a Data Analyst to turn complex data into actionable insights using Power BI, Tableau, QuickSight, and SQL. You’ll collaborate with teams across the organization to design dashboards, optimize data pipelines, and drive data literacy. This is an onsite role in Noida, ideal for problem-solvers passionate about data storytelling.

Key Responsibilities
- Dashboard & Reporting: Develop interactive dashboards in Power BI, Tableau, and QuickSight with drill-down capabilities. Automate reports and ensure real-time data accuracy.
- Data Analysis & SQL: Write advanced SQL queries (window functions, query optimization) for large datasets. Perform root-cause analysis on data discrepancies.
- ETL & Data Pipelines: Clean, transform, and model data using ETL tools (e.g. Python scripts). Work with cloud data warehouses (Snowflake, Redshift, BigQuery).
- Stakeholder Collaboration: Translate business requirements into technical specs. Train non-technical teams on self-service analytics tools.
- Performance Optimization: Improve dashboard load times and SQL query efficiency. Implement data governance best practices.

Technical Skills
Must-Have:
✔ BI Tools: Power BI (DAX, Power Query), Tableau (LODs, parameters), QuickSight (SPICE, ML insights)
✔ SQL: Advanced querying, indexing, stored procedures
✔ Data Modeling: Star schema, normalization, performance tuning
✔ Excel/Sheets: PivotTables, VLOOKUP, Power Query
Nice-to-Have:
☑ Programming: Python/R (Pandas, NumPy) for automation
☑ Cloud Platforms: AWS (QuickSight, S3), Azure (Synapse), GCP
☑ Version Control: Git, GitHub

Soft Skills
- Strong communication to explain insights to non-technical teams.
- Curiosity to explore data anomalies and trends.
- Project management (Agile/Scrum familiarity is a plus).

Qualifications
- Bachelor’s/Master’s in Data Science, Computer Science, or a related field.
- 1-3 years in data analysis, with hands-on experience in Power BI/Tableau/QuickSight.
- Portfolio of dashboards or GitHub projects (preferred)
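
To illustrate the "advanced SQL (window functions)" requirement, here is a hedged example that runs a window-function query from Python with pandas and SQLAlchemy. The connection string, schema, and column names are hypothetical placeholders.

```python
# Hedged example of an advanced SQL query (window function) run from Python.
# The connection string, schema, table, and columns are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://analyst:secret@localhost:5432/warehouse")

# Rank each customer's orders by value and keep only the top order per customer.
query = """
    SELECT customer_id, order_id, order_total, order_rank
    FROM (
        SELECT customer_id,
               order_id,
               order_total,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY order_total DESC
               ) AS order_rank
        FROM sales.orders
    ) ranked
    WHERE order_rank = 1
"""

top_orders = pd.read_sql(query, engine)
print(top_orders.head())
```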

Posted 2 days ago

Apply

0.0 - 7.0 years

0 Lacs

Delhi

On-site


Job requisition ID: 84245
Date: Jun 15, 2025
Location: Delhi
Designation: Consultant

What impact will you make?
Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.

The Team
Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about the Analytics and Information Management Practice.

Work you’ll do
As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.

We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and big data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.

Overview
We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Experience: 2 to 7 years
Location: Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Required Qualifications:
1. Bachelor’s degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data warehousing and analytics
- ETL/ELT processes
- Data lake architectures
- Version control: GitHub

Your role as a leader
At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society, and make an impact that matters. In addition to living our purpose, Senior Consultants across our organization:
- Develop high-performing people and teams through challenging and meaningful opportunities
- Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders
- Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people
- Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction
- Act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make

How you will grow
At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose
Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work, always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips
We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
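
As a hedged illustration of the Athena-based analytics work listed in the responsibilities, the snippet below starts an Athena query with boto3 and polls for completion. The database, table, region, and S3 output location are hypothetical placeholders.

```python
# Hedged sketch: running an Amazon Athena query from Python with boto3.
# Database, table, region, and the S3 results location are hypothetical placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

response = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(total) AS revenue "
                "FROM sales_orders GROUP BY order_date ORDER BY order_date",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes (a production job would add a timeout and backoff).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"][1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    raise RuntimeError(f"Athena query ended in state {state}")
```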

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

Title: Data Engineer Location: Remote Employment type: Full Time with BayOne We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics. What You'll Do Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable Work on modern data lakehouse architectures and contribute to data governance and quality frameworks Tech Stack Azure | Databricks | PySpark | SQL What We’re Looking For 3+ years of experience in data engineering or analytics engineering Hands-on with cloud data platforms and large-scale data processing Strong problem-solving mindset and a passion for clean, efficient data design Job Description: Minimum 3 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design and dimensional data modelling Solid knowledge of data warehouse best practices, development standards and methodologies Experience with ETL/ELT tools like ADF, Informatica, Talend etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery etc. Strong experience with big data tools (Databricks, Spark etc.) and programming skills in PySpark and Spark SQL. Be an independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced and dynamic environment. Excellent communication and teamwork abilities. Nice-to-Have Skills: Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge. SAP ECC/S/4HANA knowledge. Intermediate knowledge of Power BI Azure DevOps and CI/CD deployments, Cloud migration methodologies and processes BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description. Show more Show less
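As a rough sketch of the kind of lakehouse pipeline this role describes (the storage paths and column names are assumptions, and a Databricks or Delta-enabled Spark environment is presumed), a PySpark job landing raw files into a Delta table might look like:

```python
# Minimal PySpark -> Delta Lake sketch for a lakehouse-style pipeline.
# Assumes a Databricks cluster (or any Spark cluster with Delta Lake
# configured); storage paths and columns are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw_to_bronze").getOrCreate()

# Ingest raw CSV files from cloud storage (hypothetical ADLS path).
raw = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/sales/")
)

# Light cleanup: standardize column names, add a load timestamp, dedupe.
bronze = (
    raw.withColumnRenamed("OrderID", "order_id")
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["order_id"])
)

# Append to a Delta table (hypothetical bronze-layer path).
(
    bronze.write.format("delta")
    .mode("append")
    .save("abfss://bronze@examplestorage.dfs.core.windows.net/sales_orders/")
)
```

The same pattern is usually orchestrated from Azure Data Factory or Databricks Workflows, with silver/gold layers built on top of the bronze table.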

Posted 2 days ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Senior Data Engineer – AWS Expert (Lead/Associate Architect Level) 📍 Location: Trivandrum or Kochi (On-site/Hybrid) Experience:10+ Years (Relevant exp in AWS- 5+ is mandatory) About the Role We’re hiring a Senior Data Engineer with deep expertise in AWS services, strong hands-on experience in data ingestion, quality, and API development, and the leadership skills to operate at a Lead or Associate Architect level. This role demands a high level of technical ownership, especially in architecting scalable, reliable data pipelines and robust API integrations. You’ll collaborate with cross-functional teams across geographies, so a willingness to work night shifts overlapping with US hours (till 10 AM IST) is essential. Key Responsibilities Data Engineering Leadership: Design and implement scalable, end-to-end data ingestion and processing frameworks using AWS. AWS Architecture: Hands-on development using AWS Glue, Lambda, EMR, Step Functions, S3, ECS, and other AWS services. Data Quality & Validation: Build automated checks, validation layers, and monitoring for ensuring data accuracy and integrity. API Development: Develop secure, high-performance REST APIs for internal and external data integration. Collaboration: Work closely with product, analytics, and DevOps teams across geographies. Participate in Agile ceremonies and CI/CD pipelines using tools like GitLab. What We’re Looking For Experience: 5+ years in Data Engineering, with a proven track record in designing scalable AWS-based data systems. Technical Mastery: Proficient in Python/PySpark, SQL, and building big data pipelines. AWS Expert: Deep knowledge of core AWS services used for data ingestion and processing. API Expertise: Experience designing and managing scalable APIs. Leadership Qualities: Ability to work independently, lead discussions, and drive technical decisions. Preferred Qualifications Experience with Kinesis, Firehose, SQS, and data lakehouse architectures. Exposure to tools like Apache Iceberg, Aurora, Redshift, and DynamoDB. Prior experience in distributed, multi-cluster environments. Working Hours US Time Zone Overlap Required: Must be available to work night shifts overlapping with US hours (up to 10:00 AM IST). Work Location Trivandrum or Kochi – On-site or hybrid options available for the right candidate. Show more Show less

Posted 2 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Title : Data Testing Engineer Exp : 8+ years Location : Hyderabad and Gurgaon (Hybrid) Notice Period : Immediate to 15 days Job Description : Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse. ● Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules ● Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues. ● Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts. ● Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability. ● Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt). ● Experience with scripting languages to automate data testing. ● Familiarity with data visualization tools like Tableau, Power BI, or Looker Show more Show less
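The posting above calls for automated source-to-target validation scripts in SQL or Python. A minimal, hedged sketch of such a check (connection objects, table names, and the checksum column are placeholders; any DB-API compatible driver that supports cursor context managers, such as psycopg2, would do) is:

```python
# Minimal source-to-target validation sketch: compare row counts and a
# column checksum between a source table and its warehouse copy.
# Connections and table names are illustrative placeholders.

def fetch_metrics(conn, table, amount_col="amount"):
    """Return (row_count, summed_amount) for a table via one query."""
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}")
        return cur.fetchone()

def validate_table(source_conn, target_conn, source_table, target_table):
    src_count, src_sum = fetch_metrics(source_conn, source_table)
    tgt_count, tgt_sum = fetch_metrics(target_conn, target_table)

    errors = []
    if src_count != tgt_count:
        errors.append(f"row count mismatch: {src_count} vs {tgt_count}")
    if src_sum != tgt_sum:
        errors.append(f"amount checksum mismatch: {src_sum} vs {tgt_sum}")

    if errors:
        raise AssertionError(f"{source_table} -> {target_table}: " + "; ".join(errors))
    print(f"{source_table} -> {target_table}: OK ({src_count} rows)")

# Example usage (connections would come from psycopg2, redshift_connector,
# snowflake-connector-python, etc., depending on the platform):
# validate_table(src_conn, warehouse_conn, "erp.orders", "analytics.orders")
```

Checks like this are typically wrapped in pytest or an orchestration task so that a mismatch fails the pipeline run rather than silently propagating bad data.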

Posted 2 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Position Title : Data Engineer Experience Range : 4+ Years Location : Hyderabad and Gurgaon (Hybrid) Notice Period : Immediate to 15 days Primary Skills Required: Kafka, Spark, Python, SQL, Shell Scripting, Databricks, Snowflake, AWS and Azure Cloud What you will do: 1. Provide Expertise and Guidance as a Senior Experienced Engineer in solution design and strategy for Data Lake and analytics-oriented Data Operations. 2. Design, develop, and implement end-to-end data solutions (storage, integration, processing, access) on hyperscaler platforms like AWS and Azure. 3. Architect and implement Integration, ETL, and data movement solutions using SQL Server Integration Services (SSIS)/ C#, AWS Glue, MSK and/or Confluent, and other COTS technologies. 4. Prepare documentation and designs for data solutions and applications. 5. Design and implement distributed analytics platforms for analyst teams. 6. Design and implement streaming solutions using Snowflake, Kafka and Confluent. 7. Migrate data from traditional relational database systems (ex. SQL Server, Postgres) to AWS and cloud data platforms such as Amazon RDS, Aurora, Redshift, DynamoDB, Cloudera, Snowflake, Databricks, etc. Who you are: 1. Bachelor's degree in Computer Science, Software Engineering. 2. 4+ Years of experience in the Data domain as an Engineer and Architect. 3. Demonstrated sense of ownership and accountability in delivering high-quality data solutions independently or with minimal handholding. 4. Ability to thrive in a dynamic environment, adapting to evolving requirements and challenges. 5. A solid understanding of AWS and Azure storage solutions such as S3, EFS, and EBS. 6. A solid understanding of AWS and Azure compute solutions such as EC2. 7. Experience implementing solutions on AWS and Azure relational databases such as MSSQL, SSIS, Amazon Redshift, RDS, and Aurora. 8. Experience implementing solutions leveraging ElastiCache and DynamoDB. 9. Experience designing and implementing Enterprise Data Warehouse, Data Marts/Lakes. 10. Experience with Star or Snowflake Schema. 11. Experience with R or Python and other emerging technologies in D&A. 12. Understanding of Slowly Changing Dimensions and Data Vault Model. AWS and Azure Certifications are preferred. Show more Show less
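For the streaming responsibilities mentioned in the role above, a minimal Spark Structured Streaming consumer reading from Kafka could be sketched as follows. This is a hedged example: broker addresses, topic, schema, and sink paths are assumptions, and the cluster is assumed to have the spark-sql-kafka connector available.

```python
# Minimal Spark Structured Streaming sketch: read JSON events from Kafka
# and append them to a Parquet sink on object storage. Broker, topic,
# schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_to_lake").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("customer_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes in the `value` column; parse it.
parsed = (
    events.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("processed_at", F.current_timestamp())
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-lake/streams/orders/")
    .option("checkpointLocation", "s3://example-lake/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()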

Posted 2 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Role Summary Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring the best-in-class medicines to patients around the world. Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes. Role Responsibilities Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes. Provide guidance and may lead/co-lead moderately complex projects. Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes. Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs. Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise. Collaborate effectively with contractors to deliver technical enhancements. Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment. Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency. Conduct root cause analysis and address production data issues. Lead the design, development, and implementation of AI models and algorithms to support sophisticated data analytics and supply chain initiatives. Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects. Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives. Document and present findings, methodologies, and project outcomes to various stakeholders. Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery. Ability to work with large and complex datasets, including data cleaning, preprocessing, and feature selection. Basic Qualifications A bachelor's or master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline. Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations. Over 2 years of experience in AI, machine learning, and large language models (LLMs) development and deployment. Proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred. Strong understanding of data structures, algorithms, and software design principles Programming Languages: Proficiency in Python, SQL, and familiarity with Java or Scala AI and Automation: Knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect. 
Ability to use GenAI or Agents to augment data engineering practices Preferred Qualifications Data Warehousing: Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake. ETL Tools: Knowledge of ETL tools like Apache NiFi, Talend, or Informatica. Big Data Technologies: Familiarity with Hadoop, Spark, and Kafka for big data processing. Cloud Platforms: Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP). Containerization: Understanding of Docker and Kubernetes for containerization and orchestration. Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files. Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune. Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets. Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch. Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship. Non-standard Work Schedule, Travel Or Environment Requirements Occasionally travel required Work Location Assignment: Hybrid The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary and eligibility to participate in our share based long term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits | (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility. Sunshine Act Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative. 
EEO & Employment Eligibility Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States. Information & Business Tech Show more Show less
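The Pfizer role above lists Apache Airflow (or Prefect) for data pipeline automation. As a hedged illustration only (the DAG id, task names, schedule, and the stubbed extract/transform/load functions are invented for the example, not part of the posting), a minimal Airflow DAG could be structured like this:

```python
# Minimal Apache Airflow DAG sketch: a daily extract -> transform -> load
# chain. Function bodies are stubs; names and schedule are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull source data here")

def transform(**context):
    print("clean and feature-engineer here")

def load(**context):
    print("write to the warehouse here")

with DAG(
    dag_id="example_commercial_data_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Task dependencies define the pipeline order.
    t_extract >> t_transform >> t_load
```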

Posted 3 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Specialist, Software Development Engineering JD We are looking for a seasoned Data Engineer to design and develop Data Applications powering various Operational and Analytical use cases. This is a full-time position with career growth opportunities and a competitive benefits package. If you want to work with leading technology & Cloud and help financial institutions and businesses worldwide solve complex business challenges every day, this is the right opportunity for you. What does a successful Data Engineer do at Fiserv? You will be responsible for developing Data solutions on the Cloud along with Product enhancements and new features. As a hands-on engineer, you will be working in an Agile development model, developing and maintaining Data solutions including Data ingestion, Transformation, and reporting. What You Will Do Be responsible for driving Data and Analytical application development and maintenance Design and develop highly efficient Data engineering pipelines and Database systems leveraging Oracle/Snowflake, AWS, Java Demonstrate the ability to build highly efficient, performant Data applications that serve the Business with high data accuracy and fast response times. Optimize performance and fix bugs to improve Data availability and Accuracy. Create Technical Design documents. Collaborate with multiple teams to provide technical know-how and solutions to complex business problems. Develop reusable assets and create a knowledge repository. What You Will Need To Have 7+ years of experience in designing and deploying Enterprise-level Data Applications Strong development/technical skills in PL/SQL, Oracle, Shell Scripting, Java, Spring Boot Experience in Cloud platforms like AWS and experience in one of the Cloud Databases like Snowflake or Redshift Strong understanding and development skills in Data Transformations and Aggregations What Would Be Great To Have Experience in Java, RESTful APIs Experience in handling Real-time data loading using Kafka would be an advantage. You stay focused - you want to ship software that solves real problems for real people, now. You’re a professional – you understand that it’s not enough to write working code. It must also be well-designed, easy to test, and easy to add to over time. You’re learning – no matter how much you know, you are always seeking to learn more and to become a better engineer and leader. Thank You For Considering Employment With Fiserv. Please apply using your legal name. Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our Commitment To Diversity And Inclusion Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note To Agencies Fiserv does not accept resume submissions from agencies outside of existing agreements. 
Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning About Fake Job Posts Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address. Show more Show less

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Description Intern- Data Solutions As an Intern- Data Solutions, you will be part of the Commercial Data Solutions team, providing technical/data expertise for the development of analytical data products to enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space – to develop best-in-class data pipelines and products, working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery. Your Specific Responsibilities Will Include Hands-on development of last-mile data products using the most up-to-date technologies and software / data / DevOps engineering practices Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions and utilizing datasets in an optimal way Develop deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts and visualization developers on how to use these data models Develop analytical data products for reusability, governance and compliance by design Align with organization strategy and implement a semantic layer for analytics data products Support data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks Education B.Tech / B.S., M.Tech / M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or related field Required Experience High proficiency in SQL, Python and AWS Good understanding and comprehension of the requirements provided by the Data Product Owner and Lead Analytics Engineer Understanding of creating / adopting data models to meet requirements from Marketing, Data Science, Visualization stakeholders Experience with feature engineering Hands-on with cloud-based (AWS / GCP / Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.) Hands-on with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku) Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. 
Employee Status Intern/Co-op (Fixed Term) Relocation VISA Sponsorship Travel Requirements Flexible Work Arrangements Hybrid Shift Valid Driving License Hazardous Material(s) Required Skills Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model Preferred Skills Job Posting End Date 06/16/2025 A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID R344334 Show more Show less

Posted 3 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Who we are...? REA India is a part of REA Group Ltd. of Australia (ASX: REA) (“REA Group”). It is the country’s leading full stack real estate technology platform that owns Housing.com and PropTiger.com. In December 2020, REA Group acquired a controlling stake in REA India. REA Group, headquartered in Melbourne, Australia, is a multinational digital advertising business specialising in property. It operates Australia’s leading residential and commercial property websites, realestate.com.au and realcommercial.com.au and owns leading portals in Hong Kong (squarefoot.com.hk) and China (myfun.com). REA Group also holds a significant minority shareholding in Move, Inc., operator of realtor.com in the US, and the PropertyGuru Group, operator of leading property sites in Malaysia, Singapore, Thailand, Vietnam and Indonesia. REA India is the only player in India that offers a full range of services in the real estate space, assisting consumers through their entire home seeking journey all the way from initial search and discovery to financing to the final step of transaction closure. It offers advertising and listings products to real estate developers, agents & homeowners, exclusive sales and marketing solutions to builders, data and content services, and personalized search, virtual viewing, site visits, negotiations, home loans and post- sales services to consumers for both buying and renting. With a 1600+ strong team, REA India has a national presence with 25+ offices across India with its corporate office located in Gurugram, Haryana. Housing.com Founded in 2012 and acquired by REA India in 2017, Housing.com is India’s most innovative real estate advertising platform for homeowners, landlords, developers, and real estate brokers. The company offers listings for new homes, resale homes, rentals, plots and co-living spaces in India. Backed by strong research and analytics, the company’s experts provide comprehensive real estate services that cover advertising and marketing, sales solutions for real estate developers, personalized search, virtual viewing, AR&VR content, home loans, end-to-end transaction services, and post-transaction services to consumers for both buying and renting. PropTiger.com PropTiger.com is among India’s leading digital real estate advisory firm offering a one-stop platform for buying residential real estate. Founded in 2011 with the goal to help people buy their dream homes, PropTiger.com leverages the power of information and the organisation’s deep rooted understanding of the real estate sector to bring simplicity, transparency and trust in the home buying process. PropTiger.com helps home-buyers through the entire home-buying process through a mix of technology-enabled tools as well as on-ground support. The company offers researched information about various localities and properties and provides guidance on matters pertaining to legal paperwork and loan assistance to successfully fulfil a transaction. Our Vision Changing the way India experiences property. Our Mission To be the first choice of our consumers and partners in discovering, renting, buying, selling, financing a home, and digitally enabling them throughout their journey. We do that with data, design, technology, and above all, the passion of our people while delivering value to our shareholders. Our Culture REA India being ranked 5th among the coveted list of India’s Best 100 Companies to Work For in 2024 by the Great Place to Work Institute®. 
REA India was also ranked among the Top 5 workplaces list in 2023, the Top 25 workplaces list in 2022 and 2021, and the Top 50 workplaces list in 2019. Culture forms the core of our foundation and our effort towards creating an engaging workplace, which has resulted in REA India being recognized as a Best Workplace™ in Building a Culture of Innovation by All in 2024 & 2023. In addition, REA India was also recognized as one of India's Best Workplaces™ in Retail (e-commerce category) for the fourth time in 2024. REA India is ranked 4th among Best Workplaces in Asia in 2023 and was ranked 55th in 2022, & 48th in 2021, apart from being recognized as one of the Top 50 Best Workplaces™ for Women in India in 2023 and 2021. REA India is also recognized as one of India's Top 50 Best Workplaces for Millennials in 2023 by Great Place to Work®. At REA India, we believe in creating a home for our people, where they feel a sense of belonging and purpose. By fostering a culture of inclusion and continuous learning and growth, every team member has the opportunity to thrive, embrace the spirit of being part of a global family, while contributing to revolutionizing the way India experiences property. When you come to REA India, you truly COME HOME! REA India (Housing.com, PropTiger.com) is an equal opportunity employer and welcomes all qualified individuals to apply for employment. We are committed to creating an environment that is free from discrimination, harassment, and any other form of unlawful behavior. We value diversity and inclusion and do not discriminate against our people or applicants for employment based on age, color, gender, marital status, caste, religion, race, ethnic group, nationality, religious or political conviction, sexual orientation, gender identity, pregnancy, family responsibility, or disability or any other legally protected status. We firmly strive to eliminate any barriers that may impede equal opportunities while also recognizing that specific job roles may require appointees to possess the necessary qualifications, skills, abilities to perform essential functions of the position effectively. Our Tech Stack Java (Spring/Hibernate/JPA/REST), Ruby, Rails, Erlang, Python, JavaScript, NodeJS, AngularJS, Objective-C, React, Android; AWS, Docker, Kubernetes, Microservices Architecture; SaltStack, Ansible, Consul, Jenkins, Vault, Vagrant, VirtualBox, ELK Stack; Varnish, Akamai, CloudFront, Apache, Nginx, PWA, AMP; MySQL, Aurora, Postgres, AWS Redshift, MongoDB; Redis, Aerospike, Memcache, Elasticsearch, Solr About the Role We are seeking a Head of Architecture to define and drive the end-to-end architecture strategy for REA India. This leadership role will focus on scalability, security, cloud optimization, and AI-driven innovation while mentoring teams and enhancing development efficiency. The role also requires collaborating with REA Group leaders to align with the global architectural strategy. 
Key Responsibilities Architectural Leadership Maintain Architectural Decision Records (ADR) to document key technical choices and their rationale Define and implement scalable, secure, and high-performance architectures across Housing and PropTiger Align technical decisions with business goals, leveraging microservices, distributed systems, and API-first design Cloud & DevOps Excellence Optimize cloud infrastructure (AWS/GCP) for cost, performance, and scalability Improve SEO performance by optimizing website architecture, performance, and indexing strategies Enhance CI/CD pipelines, automation, and Infrastructure as Code (IaC) to accelerate delivery Security & Compliance Establish and enforce security best practices for data protection, identity management, and compliance Strengthen security posture through proactive risk mitigation and governance Data & AI Strategy Architect data pipelines and AI-driven solutions to enable automation and data-driven decision-making Lead Generative AI initiatives to enhance product development and user experiences Incident Management & Operational Excellence Establish best practices for incident management, ensuring system reliability, rapid recovery, and root cause analysis Drive site reliability engineering (SRE) principles to improve uptime, observability, and performance monitoring Team Leadership & Mentorship Mentor engineering teams, fostering a culture of technical excellence, innovation, and continuous learning Collaborate with product and business leaders to align technology roadmaps with strategic objectives What We’re Looking For 12+ years in software architecture, cloud platforms (AWS/GCP), and large-scale system design Expertise in microservices, API design, DevOps, CI/CD, and cloud cost optimization Strong background in security best practices and governance Experience in Data Architecture, AI/ML pipelines, and Gen AI applications Proven leadership in mentoring and developing high-performing engineering teams Strong problem-solving, analytical, and cross-functional collaboration skills Why Join Us? Build and lead high-scale real estate tech products Drive cutting-edge AI and cloud innovations Mentor and shape the next generation of top engineering talent Show more Show less

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential. If you're ready for a challenge and growth, read on. Exp: 7+ yrs Location: Chennai, Hyderabad Immediate joiner only, WFO Mandatory skills: SQL, Python, PySpark, Databricks (strong in core Databricks), AWS (AWS is mandatory) JD: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations. Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data. Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3). Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks. Regards, R Usha (usha@livecjobs.com) Show more Show less
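The role above emphasizes data quality checks and validation rules on Databricks/Delta Lake. A minimal, hedged sketch of such checks in PySpark (the table path, column names, and rules are assumptions made for the example) could look like this:

```python
# Minimal data quality check sketch in PySpark: validate a Delta table
# against a few declarative rules before publishing it downstream.
# Table path and rules are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.format("delta").load("s3://example-lake/silver/orders/")

checks = {
    "order_id_not_null": df.filter(F.col("order_id").isNull()).count() == 0,
    "no_duplicate_orders": df.count() == df.dropDuplicates(["order_id"]).count(),
    "amount_non_negative": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In a real pipeline this would fail the Airflow/Databricks job run
    # so bad data never reaches reporting tables.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```

In practice these rules would usually be expressed with a framework (Delta Live Tables expectations, Great Expectations, or dbt tests) and triggered by the same orchestrator that runs the ingestion job.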

Posted 3 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Description About Oracle APAC ISV Business The Oracle APAC ISV team is one of the fastest-growing and highest-performing business units in APAC. We are a prime team that operates to serve a broad range of customers across the APAC region. ISVs are at the forefront of today's fastest-growing industries. Much of this growth stems from enterprises shifting toward adopting cloud-native ISV SaaS solutions. This transformation drives ISVs to evolve from traditional software vendors to SaaS service providers. Industry analysts predict exponential growth in the ISV market over the coming years, making it a key growth pillar for every hyperscaler. Our cloud engineering team works on pitch-to-production scenarios, bringing ISVs' solutions onto Oracle Cloud Infrastructure (OCI) with the aim of providing a cloud platform for running their business that is more performant, more flexible, more secure, compatible with open-source technologies, and offers multiple innovation options while remaining highly cost-effective. The team walks the path alongside our customers and is regarded as a trusted techno-business advisor by them. Required Skills/Experience Your versatility and hands-on expertise will be your greatest asset as you deliver on time-bound implementation work items and empower our customers to harness the full power of OCI. We also look for: Bachelor's degree in Computer Science, Information Technology, or a related field. Relevant certifications in database management, OCI or other cloud platforms (AWS, Azure, Google Cloud), or NoSQL databases 8+ years of professional work experience Proven experience in migrating databases and data to OCI or other cloud environments (AWS, Azure, Google Cloud, etc.). Expertise in Oracle Database and related technologies like RMAN, Data Guard, Advanced Security Options, MAA Hands-on experience with NoSQL databases (MongoDB / Cassandra / DynamoDB, etc.) and other DBs like MySQL/PostgreSQL Demonstrable expertise in Data Management systems, caching systems and search engines such as MongoDB, Redshift, Snowflake, Spanner, Redis, Elasticsearch, as well as Graph databases like Neo4j An understanding of complex data integration, data pipelines and stream analytics using products like Apache Kafka, Oracle GoldenGate, Oracle Stream Analytics, Spark etc. Knowledge of how to deploy data management within a Kubernetes/Docker environment as well as the corresponding management of state in microservice applications is a plus Ability to work independently and handle multiple tasks in a fast-paced environment. Solid experience managing multiple implementation projects simultaneously while maintaining high-quality standards. Ability to develop and manage project timelines, resources, and budgets. Career Level - IC4 Responsibilities What You’ll Do As a solution specialist, you will work closely with our cloud architects and key stakeholders of ISVs to propagate awareness and drive implementation of OCI-native as well as open-source technologies by ISV customers. Lead and execute end-to-end data platform migrations (including heterogeneous data platforms) to OCI. Design and implement database solutions within OCI, ensuring scalability, availability, and performance. Set up, configure, and secure production environments for data platforms in OCI. Migrate databases from legacy systems or other Clouds to OCI while ensuring minimal downtime and data integrity. Implement and manage CDC solutions to track and capture changes in databases in real-time. 
Configure and manage CDC tools, ensuring low-latency, fault-tolerant data replication for high-volume environments. Assist with the creation of ETL/data pipelines for the migration of large datasets into data warehouse on OCI Configure and manage complex database deployment topologies, including clustering, replication, and failover configurations. Perform database tuning, monitoring, and optimization to ensure high performance in production environments. Implement automation scripts and tools to streamline database administration and migration processes. Develop and effectively present your proposed solution and execution plan to both internal and external stakeholders. Clearly explain the technical advantages of OCI based database management systems About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law. Show more Show less
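The Oracle role above centres on CDC and large-scale migrations with tools such as GoldenGate and Kafka connectors. Where that tooling is not yet in place, incremental extraction is often approximated with a simple watermark query. The following is an illustrative, hedged sketch only (the table, watermark column, and connection objects are assumptions, not part of the posting, and the SQL placeholder style depends on the driver in use):

```python
# Minimal watermark-based incremental extraction sketch: pull only rows
# changed since the last successful run. A simple stand-in for real CDC
# tooling; connection, table, and column names are illustrative.
from datetime import datetime

def extract_increment(src_conn, last_watermark):
    """Fetch rows modified after the previous high-water mark."""
    sql = """
        SELECT order_id, customer_id, amount, updated_at
        FROM sales.orders
        WHERE updated_at > %s
        ORDER BY updated_at
    """  # %s is the psycopg2-style placeholder; other drivers differ
    with src_conn.cursor() as cur:
        cur.execute(sql, (last_watermark,))
        rows = cur.fetchall()
    # The new watermark is the latest updated_at seen in this batch.
    new_watermark = rows[-1][-1] if rows else last_watermark
    return rows, new_watermark

# Example usage: persist the watermark between runs (file, control table, etc.)
# rows, wm = extract_increment(src_conn, datetime(2025, 1, 1))
# load_into_target(rows); save_watermark(wm)
```

Real CDC tools capture deletes and intra-batch updates that a watermark query misses, which is why GoldenGate or log-based connectors are preferred for high-volume, low-latency replication.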

Posted 3 days ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium) (see the code sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
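Several of the questions above (SORTKEY vs DISTKEY, the COPY command, vacuuming) can be illustrated with a short snippet. This is a hedged sketch only: the table design, S3 path, IAM role ARN, and cluster endpoint are placeholders, and the SQL is executed through psycopg2, which works against Redshift's PostgreSQL-compatible endpoint.

```python
# Illustrative Redshift snippet covering DISTKEY/SORTKEY table design,
# bulk loading with COPY, and post-load VACUUM/ANALYZE maintenance.
# Cluster endpoint, credentials, S3 path, and IAM role are placeholders.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sales_fact (
    order_id      BIGINT,
    customer_id   BIGINT,
    order_date    DATE,
    amount        DECIMAL(12, 2)
)
DISTKEY (customer_id)        -- co-locate rows for joins on customer_id
SORTKEY (order_date);        -- speed up range filters on order_date
"""

COPY_CMD = """
COPY sales_fact
FROM 's3://example-bucket/sales/2025/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
FORMAT AS PARQUET;
"""

# VACUUM reclaims space and re-sorts rows; ANALYZE refreshes statistics.
MAINTENANCE = ["VACUUM sales_fact;", "ANALYZE sales_fact;"]

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(COPY_CMD)
    for stmt in MAINTENANCE:
        cur.execute(stmt)
conn.close()
```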

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies