
1964 Redshift Jobs - Page 5

JobPe aggregates job listings for easy access; you apply directly on the original job portal.

3.0 years

0 Lacs

India

Remote


Title: Data Engineer
Location: Remote
Employment type: Full-time with BayOne

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do
Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory. Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable. Work on modern data lakehouse architectures and contribute to data governance and quality frameworks.

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For
3+ years of experience in data engineering or analytics engineering. Hands-on experience with cloud data platforms and large-scale data processing. A strong problem-solving mindset and a passion for clean, efficient data design.

Job Description
Minimum 3 years of experience in modern data engineering, data warehousing, and data lake technologies on cloud platforms like Azure, AWS, GCP, or Databricks; Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design, and dimensional data modelling. Solid knowledge of data warehouse best practices, development standards, and methodologies. Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced, dynamic environment. Excellent communication and teamwork abilities.

Nice-to-Have Skills
Knowledge of Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB. SAP ECC/S/4 and HANA knowledge. Intermediate knowledge of Power BI. Azure DevOps and CI/CD deployments; cloud migration methodologies and processes.

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment on the basis of race, color, sex, age, religion, sexual orientation, gender identity, veteran status, disability, or any other federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
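For context on the kind of work this posting describes, here is a minimal, illustrative PySpark batch pipeline (read raw files, apply simple cleansing and a data-quality rule, write a partitioned curated table). It is not taken from BayOne; the paths, column names, and rules are hypothetical placeholders.

```python
# Illustrative sketch only: a small PySpark batch pipeline (ingest -> clean -> curated zone).
# All paths, column names, and the quality rule are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest_example").getOrCreate()

# Read raw CSV landed by an upstream copy process (e.g. an ADF copy activity).
raw = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/orders/")          # hypothetical raw zone
)

# Basic cleansing plus a simple data-quality filter.
clean = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount").isNotNull() & (F.col("amount") >= 0))
)

# Write a date-partitioned Parquet (or Delta, on Databricks) table to the curated zone.
(
    clean.withColumn("order_date", F.to_date("order_ts"))
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("/mnt/curated/orders/")  # hypothetical curated zone
)
```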

Posted 1 day ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


TEKsystems is seeking a Senior AWS + Data Engineer to join our dynamic team. The ideal candidate should have data engineering expertise with Hadoop, Scala/Python, and AWS services. This role involves designing, developing, and maintaining scalable and reliable software solutions.

Job Title: Data Engineer – Spark/Scala (Batch Processing)
Location: Manyata – Hybrid
Experience: 7+ years
Type: Full-Time

Mandatory Skills
7-10 years of experience in design, architecture, or development in analytics and data warehousing. Experience building end-to-end solutions on a big data platform with Spark or Scala programming. 5 years of solid experience in ETL pipeline building with the Spark or Scala programming framework, with knowledge of developing UNIX shell scripts and Oracle SQL/PL-SQL. Experience with big data platforms for ETL development on the AWS cloud platform. Proficiency in AWS cloud services, specifically EC2, S3, Lambda, Athena, Kinesis, Redshift, Glue, EMR, DynamoDB, IAM, Secrets Manager, Step Functions, SQS, SNS, and CloudWatch. Excellent skills in Python-based framework development are mandatory. Should have experience with Oracle SQL database programming, SQL performance tuning, and relational model analysis. Extensive experience with Teradata data warehouses and Cloudera Hadoop. Proficient across enterprise analytics/BI/DW/ETL technologies such as Teradata Control Framework, Tableau, OBIEE, SAS, Apache Spark, and Hive. Appreciation of analytics and BI architecture and broad experience across all technology disciplines. Experience working within a Data Delivery Life Cycle framework and Agile methodology. Extensive experience in large enterprise environments handling large volumes of data with high SLAs. Good knowledge of developing UNIX scripts, Oracle SQL/PL-SQL, and Autosys JIL scripts. Well versed in AI-powered engineering tools like Cline and GitHub Copilot.

Please send resumes to nvaseemuddin@teksystems.com or kebhat@teksystems.com.
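As an illustration of one batch-load pattern this kind of role involves (staging files on S3 and loading them into Amazon Redshift), here is a hedged sketch using the Redshift Data API. It is not taken from TEKsystems; the bucket, cluster, schema, table, IAM role, and region names are hypothetical placeholders.

```python
# Illustrative sketch only: submit a Redshift COPY of staged Parquet files via the
# Redshift Data API. All identifiers below are hypothetical placeholders.
import boto3

S3_PATH = "s3://example-curated-bucket/orders/2025-06-15/"                 # hypothetical
IAM_ROLE = "arn:aws:iam::123456789012:role/example-redshift-copy-role"    # hypothetical

copy_sql = f"""
    COPY analytics.orders
    FROM '{S3_PATH}'
    IAM_ROLE '{IAM_ROLE}'
    FORMAT AS PARQUET;
"""

# The Data API lets batch jobs run SQL without managing JDBC connections or passwords.
client = boto3.client("redshift-data", region_name="ap-south-1")          # hypothetical region
resp = client.execute_statement(
    ClusterIdentifier="example-cluster",                                   # hypothetical
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
print("Submitted COPY, statement id:", resp["Id"])
```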

Posted 1 day ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra

Remote


Solution Engineer - Data & AI
Mumbai, Maharashtra, India
Date posted: Jun 16, 2025
Job number: 1830869
Work site: Up to 50% work from home
Travel: 25-50%
Role type: Individual Contributor
Profession: Technology Sales
Discipline: Technology Specialists
Employment type: Full-Time

Overview
As a Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements like proofs of concept, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, hone your technical skills, and become adept at solution design and deployment. As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
6+ years of technical pre-sales, technical consulting, technology delivery, or related experience, OR equivalent experience. 4+ years of experience with cloud and hybrid or on-premises infrastructure, architecture design, migrations, industry standards, and/or technology management. Proficient in data warehouse and big data migration, including on-prem appliances (Teradata, Netezza, Oracle), Hadoop (Cloudera, Hortonworks), and Azure Synapse Gen2. OR 5+ years of technical pre-sales or technical consulting experience, OR a Bachelor's Degree in Computer Science, Information Technology, or a related field AND 4+ years of technical pre-sales or technical consulting experience, OR a Master's Degree in Computer Science, Information Technology, or a related field AND 3+ years of technical pre-sales or technical consulting experience, OR equivalent experience. Expert in Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps. Expert in Azure Analytics (Fabric, Azure Databricks, Purview) and competitors (BigQuery, Redshift, Snowflake) across data warehouse, data lake, big data, analytics, real-time intelligence, and reporting with integrated data security and governance. Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes.

Responsibilities
Drive technical sales with decision makers using demos and PoCs to influence solution design and enable production deployments. Lead hands-on engagements, such as hackathons and architecture workshops, to accelerate adoption of Microsoft's cloud platforms. Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions. Resolve technical blockers and objections, collaborating with engineering to share insights and improve products. Maintain deep expertise in the Analytics Portfolio: Microsoft Fabric (OneLake, DW, real-time intelligence, BI, Copilot), Azure Databricks, Purview Data Governance, and Azure Databases: SQL DB, Cosmos DB, PostgreSQL. Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions. Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana

Remote


Solution Engineer - Cloud & Data AI
Gurgaon, Haryana, India
Date posted: Jun 16, 2025
Job number: 1830866
Work site: Up to 50% work from home
Travel: 25-50%
Role type: Individual Contributor
Profession: Technology Sales
Discipline: Technology Specialists
Employment type: Full-Time

Overview
As a Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements like proofs of concept, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, hone your technical skills, and become adept at solution design and deployment. As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications (Preferred)
6+ years of technical pre-sales, technical consulting, technology delivery, or related experience, OR equivalent experience. 4+ years of experience with cloud and hybrid or on-premises infrastructure, architecture design, migrations, industry standards, and/or technology management. Proficient in data warehouse and big data migration, including on-prem appliances (Teradata, Netezza, Oracle), Hadoop (Cloudera, Hortonworks), and Azure Synapse Gen2. OR 5+ years of technical pre-sales or technical consulting experience, OR a Bachelor's Degree in Computer Science, Information Technology, or a related field AND 4+ years of technical pre-sales or technical consulting experience, OR a Master's Degree in Computer Science, Information Technology, or a related field AND 3+ years of technical pre-sales or technical consulting experience, OR equivalent experience. Expert in Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps. Expert in Azure Analytics (Fabric, Azure Databricks, Purview) and other cloud products (BigQuery, Redshift, Snowflake) across data warehouse, data lake, big data, analytics, real-time intelligence, and reporting with integrated data security and governance. Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes.

Responsibilities
Drive technical conversations with decision makers using demos and PoCs to influence solution design and enable production deployments. Lead hands-on engagements, such as hackathons and architecture workshops, to accelerate adoption of Microsoft's cloud platforms. Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions. Resolve technical blockers and objections, collaborating with engineering to share insights and improve products. Maintain deep expertise in the Analytics Portfolio: Microsoft Fabric (OneLake, DW, real-time intelligence, BI, Copilot), Azure Databricks, Purview Data Governance, and Azure Databases: SQL DB, Cosmos DB, PostgreSQL. Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions. Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 day ago

Apply

0.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh

On-site


Noida, Uttar Pradesh, India
Business Intelligence

BOLD is seeking a QA professional who will work directly with the BI Development team to validate Business Intelligence solutions. They will build test strategies, test plans, and test cases for ETL and Business Intelligence components, validate the SQL queries related to those test cases, and produce test summary reports.

Job Description

ABOUT THIS TEAM
The BOLD Business Intelligence (BI) team is a centralized team responsible for managing all aspects of the organization's BI strategy, projects, and systems. The BI team enables business leaders to make data-driven decisions by providing reports and analysis. The team is responsible for developing and managing a latency-free, credible enterprise data warehouse, which serves as a data source for decision making and as an input to various functions of the organization such as Product, Finance, Marketing, and Customer Support. The BI team has four sub-components: data analysis, ETL, data visualization, and QA. It manages deliveries through Snowflake, Sisense, and MicroStrategy as its main infrastructure solutions. Other technologies, including Python, R, and Airflow, are also used in ETL, QA, and data visualization.

WHAT YOU'LL DO
Work with Business Analysts and BI Developers to translate business requirements into test cases. Validate the data sources, extraction of data, application of transformation logic, and loading of the data into the target tables. Design, document, and execute test plans, test harnesses, test scenarios/scripts, and test cases for manual and automated testing, using bug-tracking tools.

WHAT YOU'LL NEED
Experience in data warehousing / BI testing, using any ETL and reporting tool. Extensive experience in writing and troubleshooting SQL queries on any of the following databases: Snowflake, Redshift, SQL Server, or Oracle. Exposure to data warehousing and dimensional modelling concepts. Experience in understanding ETL source-to-target mapping documents. Experience in testing code on any of the ETL tools. Experience in validating dashboards/reports on any of the reporting tools: Sisense, Tableau, SAP BusinessObjects, or MicroStrategy. Hands-on experience and strong understanding of the Software Development Life Cycle (SDLC) and Software Testing Life Cycle (STLC). Good experience with quality assurance methodologies like Waterfall, V-Model, Agile, and Scrum. Well versed in writing detailed test cases for functional and non-functional requirements. Experience with different types of testing, including black-box testing, smoke testing, functional testing, system integration testing, end-to-end testing, regression testing, and user acceptance testing (UAT), as well as involvement in load, performance, and stress testing. Expertise in using TFS, JIRA, or Excel for writing and tracking test cases. Exposure to scripting languages like Python for creating automated test scripts, or to automated tools like QuerySurge, will be an added advantage. An effective communicator with strong analytical abilities, combined with the skills to plan, implement, and present projects.

EXPERIENCE
Senior QA Engineer, BI: 4.5+ years
#LI-SV1

Benefits
Outstanding compensation: competitive salary, tax-friendly compensation structure, bi-annual bonus, annual appraisal, and equity in the company. 100% full health benefits: group mediclaim, personal accident, and term life insurance; group mediclaim benefit (including parents' coverage); Practo Plus health membership for employees and family; personal accident and term life insurance coverage. Flexible time away: 24 days of paid leave, declared fixed holidays, paternity and maternity leave, compassionate and marriage leave, and Covid leave (up to 7 days). Additional benefits: internet and home office reimbursement; in-office catered lunch, meals, and snacks; certification policy; cab pick-up and drop-off facility.

About BOLD
We transform work lives. As an established global organization, BOLD helps people find jobs. Our story is one of growth, success, and professional fulfillment. We create digital products that have empowered millions of people in 180 countries to build stronger resumes, cover letters, and CVs. The result of our work helps people interview confidently and find the right job in less time. Our employees are experts, learners, contributors, and creatives.

We celebrate and promote diversity and inclusion. We value our position as an Equal Opportunity Employer. We hire based on qualifications, merit, and our business needs. We don't discriminate regarding race, color, religion, gender, pregnancy, national origin or citizenship, ancestry, age, physical or mental disability, veteran status, sexual orientation, gender identity or expression, marital status, genetic information, or any other applicable characteristic protected by law.
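To illustrate the source-versus-target reconciliation checks this kind of ETL/BI QA role describes, here is a small, hedged sketch. It runs against an in-memory SQLite database purely so it is self-contained; in practice the same SQL pattern would be pointed at Snowflake, Redshift, SQL Server, or Oracle. Table and column names are hypothetical.

```python
# Illustrative sketch only: source-vs-target reconciliation checks for ETL testing,
# demonstrated with an in-memory SQLite database so it runs anywhere.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-ins for a staging (source) table and the loaded warehouse (target) table.
cur.executescript("""
    CREATE TABLE stg_orders (order_id INTEGER, amount REAL);
    CREATE TABLE dw_orders  (order_id INTEGER, amount REAL);
    INSERT INTO stg_orders VALUES (1, 100.0), (2, 250.5), (3, 75.0);
    INSERT INTO dw_orders  VALUES (1, 100.0), (2, 250.5), (3, 75.0);
""")

checks = {
    "row_count": "SELECT (SELECT COUNT(*) FROM stg_orders) - (SELECT COUNT(*) FROM dw_orders)",
    "amount_sum": "SELECT (SELECT SUM(amount) FROM stg_orders) - (SELECT SUM(amount) FROM dw_orders)",
}

for name, sql in checks.items():
    diff = cur.execute(sql).fetchone()[0]
    status = "PASS" if diff == 0 else f"FAIL (diff={diff})"
    print(f"{name}: {status}")
```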

Posted 1 day ago

Apply

10.0 years

0 Lacs

Kerala, India

On-site


Senior Data Engineer – AWS Expert (Lead/Associate Architect Level)
📍 Location: Trivandrum or Kochi (On-site/Hybrid)
Experience: 10+ years (5+ years of relevant AWS experience is mandatory)

About the Role
We're hiring a Senior Data Engineer with deep expertise in AWS services, strong hands-on experience in data ingestion, quality, and API development, and the leadership skills to operate at a Lead or Associate Architect level. This role demands a high level of technical ownership, especially in architecting scalable, reliable data pipelines and robust API integrations. You'll collaborate with cross-functional teams across geographies, so a willingness to work night shifts overlapping with US hours (until 10 AM IST) is essential.

Key Responsibilities
Data engineering leadership: design and implement scalable, end-to-end data ingestion and processing frameworks using AWS. AWS architecture: hands-on development using AWS Glue, Lambda, EMR, Step Functions, S3, ECS, and other AWS services. Data quality and validation: build automated checks, validation layers, and monitoring to ensure data accuracy and integrity. API development: develop secure, high-performance REST APIs for internal and external data integration. Collaboration: work closely with product, analytics, and DevOps teams across geographies; participate in Agile ceremonies and CI/CD pipelines using tools like GitLab.

What We're Looking For
Experience: 5+ years in data engineering, with a proven track record in designing scalable AWS-based data systems. Technical mastery: proficient in Python/PySpark and SQL, and in building big data pipelines. AWS expertise: deep knowledge of the core AWS services used for data ingestion and processing. API expertise: experience designing and managing scalable APIs. Leadership qualities: ability to work independently, lead discussions, and drive technical decisions.

Preferred Qualifications
Experience with Kinesis, Firehose, SQS, and data lakehouse architectures. Exposure to tools like Apache Iceberg, Aurora, Redshift, and DynamoDB. Prior experience in distributed, multi-cluster environments.

Working Hours
US time zone overlap required: must be available to work night shifts overlapping with US hours (up to 10:00 AM IST).

Work Location
Trivandrum or Kochi – on-site or hybrid options available for the right candidate.
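As a hedged illustration of the "data quality and validation" responsibility above, here is a minimal AWS Lambda handler sketch triggered by an S3 object-created event: it runs a trivial check and writes a status marker back to S3. It is not taken from the employer; the bucket, status prefix, and the check itself are hypothetical placeholders.

```python
# Illustrative sketch only: a Lambda handler for S3 "object created" events that runs a
# lightweight validation and writes a status marker. Names are hypothetical placeholders.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    record = event["Records"][0]["s3"]           # standard S3 event notification shape
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    head = s3.head_object(Bucket=bucket, Key=key)
    ok = head["ContentLength"] > 0               # trivial placeholder data-quality check

    status = {"object": f"s3://{bucket}/{key}", "size_bytes": head["ContentLength"], "valid": ok}
    s3.put_object(
        Bucket=bucket,
        Key=f"_quality/{key}.json",              # hypothetical status prefix
        Body=json.dumps(status).encode("utf-8"),
    )
    return status
```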

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Summary
We are looking for an accomplished and dynamic Data Engineering Lead to join our team and drive the design, development, and delivery of cutting-edge data solutions. This role requires a balance of strong technical expertise, strategic leadership, and a consulting mindset. As the Lead Data Engineer, you will oversee the design and development of robust data pipelines and systems, manage and mentor a team of 5 to 7 engineers, and play a critical role in architecting innovative solutions tailored to client needs. You will lead by example, fostering a culture of accountability, ownership, and continuous improvement while delivering impactful, scalable data solutions in a fast-paced consulting environment.

Job Responsibilities
Client Collaboration: Act as the primary point of contact for US-based clients, ensuring alignment on project goals, timelines, and deliverables. Engage with stakeholders to understand requirements and ensure alignment throughout the project lifecycle. Present technical concepts and designs to both technical and non-technical audiences. Communicate effectively with stakeholders, and set realistic expectations with clients while proactively addressing concerns or risks.

Data Solution Design and Development: Architect, design, and implement end-to-end data pipelines and systems that handle large-scale, complex datasets. Ensure optimal system architecture for performance, scalability, and reliability. Evaluate and integrate new technologies to enhance existing solutions. Implement best practices in ETL/ELT processes, data integration, and data warehousing.

Project Leadership and Delivery: Lead technical project execution, ensuring timelines and deliverables are met with high quality. Collaborate with cross-functional teams to align business goals with technical solutions. Act as the primary point of contact for clients, translating business requirements into actionable technical strategies.

Team Leadership and Development: Manage, mentor, and grow a team of 5 to 7 data engineers; ensure timely follow-ups on action items and maintain seamless communication across time zones. Conduct code reviews and validations, and provide feedback to ensure adherence to technical standards. Provide technical guidance and foster an environment of continuous learning, innovation, and collaboration. Support collaboration and alignment between the client and delivery teams.

Optimization and Performance Tuning: Be hands-on in developing, testing, and documenting data pipelines and solutions as needed. Analyze and optimize existing data workflows for performance and cost-efficiency. Troubleshoot and resolve complex technical issues within data systems.

Adaptability and Innovation: Embrace a consulting mindset with the ability to quickly learn and adopt new tools, technologies, and frameworks. Identify opportunities for innovation and implement cutting-edge technologies in data engineering. Exhibit a "figure it out" attitude, taking ownership and accountability for challenges and solutions.

Learning and Adaptability: Stay updated with emerging data technologies, frameworks, and tools. Actively explore and integrate new technologies to improve existing workflows and solutions.

Internal Initiatives and Eminence Building: Drive internal initiatives to improve processes, frameworks, and methodologies. Contribute to the organization's eminence by developing thought leadership, sharing best practices, and participating in knowledge-sharing activities.

Essential Skills and Qualifications

Education: Bachelor's or master's degree in computer science, data engineering, or a related field. Certifications in cloud platforms, such as Snowflake SnowPro or a data engineer certification, are a plus.

Experience: 8+ years of experience in data engineering with hands-on expertise in data pipeline development, architecture, and system optimization. Demonstrated success in managing global teams, especially across US and India time zones. Proven track record in leading data engineering teams and managing end-to-end project delivery. Strong background in data warehousing and familiarity with tools such as Matillion, dbt, and Striim.

Technical Skills: Ability to lead the design, development, and deployment of scalable data architectures, pipelines, and processes tailored to client needs. Expertise in programming languages such as Python, Scala, or Java. Proficiency in designing and delivering data pipelines in cloud data warehouses (e.g., Snowflake, Redshift) using various ETL/ELT tools such as Matillion, dbt, and Striim. Solid understanding of database systems (relational and NoSQL) and data modeling techniques. 2+ years of hands-on experience designing and developing data integration solutions using Matillion and/or dbt. Strong knowledge of data engineering and integration frameworks. Expertise in architecting data solutions, with at least two end-to-end projects with multiple transformation layers implemented successfully. Good grasp of coding standards, with the ability to define standards and testing strategies for projects. Proficiency in working with cloud platforms (AWS, Azure, GCP) and associated data services. Enthusiasm for working in an Agile methodology. Comprehensive understanding of the DevOps process, including GitHub integration and CI/CD pipelines.

Soft Skills: Exceptional problem-solving and analytical skills. Strong communication and interpersonal skills to manage client relationships and team dynamics. Ability to thrive in a consulting environment, quickly adapting to new challenges and domains. Ability to handle ambiguity and proactively take ownership of challenges. Demonstrated accountability, ownership, and a proactive approach to solving problems.

Background check required: no criminal record.

Others: Bachelor's or master's degree in computer science, engineering, or a related field, or equivalent practical experience. There are 2-3 rounds in the interview process. This is a five-days-a-week work-from-office role (no hybrid/remote options available).

Posted 1 day ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Overview

Job Description

About Business Unit
SaaSOps manages post-production support and the overall experience of Epsilon PeopleCloud products for our global clients. This function is responsible for product support, incident management, managed operations, and the automation of processes. The team has successfully incubated and mainstreamed Site Reliability Engineering (SRE) as a practice to ensure reliable product operations on a global scale. In addition, the team is actively pioneering the adoption of AI in operations (AIOps) and recently launched AI-driven self-service capabilities to enhance operational efficiency and improve client experiences. Click here to view how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice.

Responsibilities
What you will do (roles and responsibilities): Collaborate with other software developers, business analysts, and software architects to plan, design, develop, test, and maintain web-based business applications built on Microsoft and similar frameworks and technologies. Apply excellent skills and hands-on experience in developing frontend applications along with middleware and backend components. Maintain high standards of software quality within the team by establishing best practices and processes. Think creatively to push beyond the boundaries of existing practices and mindsets. Use knowledge to create new processes and improve existing ones in terms of design and performance. Package and support deployment of releases. Participate in, plan, and execute team-building and fun activities.

Qualifications
Essential skills and experience: Bachelor's degree in Computer Science or a related field, or equivalent experience. 3-4 years of experience in software engineering. Demonstrated experience driving delivery through strong delivery practices across complex programs of work. Strong communication skills. Must be detail-oriented and able to manage multiple tasks simultaneously. Willingness to learn new skills and apply them to developing new-age applications. Experience with web development technologies and frameworks including the .NET Framework, REST APIs, and MVC. Working knowledge of database technologies such as SQL, Oracle, and DynamoDB; basic Oracle SQL and PL/SQL is a must. Proficiency in HTML, CSS, JavaScript, and jQuery. Unit testing (NUnit). Cloud (AWS/Azure). Knowledge of version control tools like GitHub, VSTS, etc. is a must. Agile development and DevOps (CI/CD). C# and .NET Framework, Web APIs. Debugging and troubleshooting skills. Ability to drive things independently with minimal supervision. Developing web APIs in JSON and troubleshooting them during production issues.

Desirable skills and experience: API testing through Postman or ReadyAPI. Responsive web (Bootstrap). Experience with the Unix/Linux command line and bash shell is good to have. Experience in AWS, Redshift or equivalent databases, Lambda functions, and Snowflake DB types. Proficiency in Unix shell scripting and Python. Knowledge of AWS EC2, S3, AMI, etc. Security scans and vulnerability fixes for web applications. Kibana tool validations and analysis.

Personal attributes: Professionalism and integrity. Self-starter. Excellent command of verbal and written English. Well organized, with the ability to coordinate development across multiple team members. Commitment to continuous learning and team/individual growth. Ability to quickly adapt to a changing tech landscape. Analysis and problem-solving skills.

Additional Information
Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we've provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.

Epsilon has a core set of five values that define our culture and guide us to create value for our clients, our people and consumers. We are seeking candidates that align with our company values, demonstrate them and make them meaningful in their day-to-day work: Act with integrity. We are transparent and have the courage to do the right thing. Work together to win together. We believe collaboration is the catalyst that unlocks our full potential. Innovate with purpose. We shape the market with big ideas that drive big outcomes. Respect all voices. We embrace differences and foster a culture of connection and belonging. Empower with accountability. We trust each other to own and deliver on common goals.

Because You Matter
YOUniverse. A work-world with you at the heart of it! At Epsilon, we believe people make the place. And everything we do is designed with you in mind. That's why our work-world, aptly named 'YOUniverse', is focused on creating a nurturing environment that elevates your growth, wellbeing and work-life harmony. So, come be part of a people-centric workspace where care for you is at the core of all we do. Take a trip to YOUniverse and explore our unique benefits here.

Epsilon is an Equal Opportunity Employer. Epsilon is committed to promoting diversity, inclusion, and equal employment opportunities by using reasonable efforts to attract, recruit, engage and retain qualified individuals of all ethnicities and backgrounds, including, but not limited to, women, people of color, LGBTQ individuals, people with disabilities and any other underrepresented groups, traits or characteristics.

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site


About Oportun
Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun
Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Engineering Business Unit Overview
The charter for the Engineering group at Oportun is to be the world-class engineering force behind our innovative products. The group plays a vital role in designing, developing, and maintaining cutting-edge software solutions that power our mission and advance our business. We strike a balance between leveraging leading tools and developing in-house solutions to create member experiences that empower their financial independence. The talented engineers in this group are dedicated to delivering and maintaining performant, elegant, and intuitive systems to our business partners and retail members. Our platform combines service-oriented platform features with a sophisticated user experience and is enabled through a best-in-class (and fun to use!) automated development infrastructure. We prove that FinTech is more fun, more challenging, and in our case, more rewarding as we build technology that changes our members' lives.

Engineering at Oportun is responsible for high-quality and scalable technical execution to achieve business goals and product vision. The team ensures business continuity for members by effectively managing systems and services, overseeing technical architectures and system health. In addition, they are responsible for identifying and executing on the technical roadmap that enables the product vision and fosters member and business growth in a scalable and efficient manner. The Enterprise Data and Technology (EDT) pillar within the Engineering Business Unit focuses on enabling wide use of corporate data assets while ensuring quality, availability and security across the data landscape.

Position Overview
As a Senior Data Engineer at Oportun, you will be a key member of our EDT team, responsible for designing, developing, and maintaining sophisticated software and data platforms in pursuit of the charter of the engineering group. Your mastery of a technical domain enables you to take up business problems and solve them with a technical solution. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. In this role you will have the opportunity to take responsibility for leading the technology effort, from technical requirements gathering to final successful delivery of the product, for large initiatives (cross-functional, multi-month projects).

Responsibilities
Data architecture and design: Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements. Collaborate with stakeholders to understand data requirements, build subject matter expertise, and define optimal data models and structures.
Data pipeline development and optimization: Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data. Optimize data pipelines for performance, reliability, and scalability.
Database management and optimization: Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security. Implement and manage ETL processes for efficient data loading and retrieval.
Data quality and governance: Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations. Drive initiatives to improve data quality and documentation of data assets.
Mentorship and leadership: Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth. Lead and participate in code reviews, ensuring best practices and high-quality code.
Collaboration and stakeholder management: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet those needs. Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value.
Performance monitoring and optimization: Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability.

Common Software Engineering Requirements
You actively contribute to the end-to-end delivery of complex software applications, ensuring adherence to best practices and high overall quality standards. You have a strong understanding of a business or system domain, with sufficient knowledge and expertise around the appropriate metrics and trends. You collaborate closely with product managers, designers, and fellow engineers to understand business needs and translate them into effective software solutions. You provide technical leadership and expertise, guiding the team in making sound architectural decisions and solving challenging technical problems. Your solutions anticipate scale, reliability, monitoring, integration, and extensibility. You conduct code reviews and provide constructive feedback to ensure code quality, performance, and maintainability. You mentor and coach junior engineers, fostering a culture of continuous learning, growth, and technical excellence within the team. You play a significant role in the ongoing evolution and refinement of the tools and applications used by the team, and drive adoption of new practices within your team. You take ownership of customer issues, including initial troubleshooting, identification of root cause, and issue escalation or resolution, while maintaining the overall reliability and performance of our systems. You set the benchmark for responsiveness, ownership, and overall accountability of engineering systems. You independently drive and lead multiple features, contribute to one or more large projects, and lead smaller projects. You can orchestrate work that spans multiple engineers within your team and keep all relevant stakeholders informed. You keep your lead/EM informed about your work and that of the team so they can share it with stakeholders, including escalation of issues.

Requirements
Bachelor's or Master's degree in Computer Science, Data Science, or a related field. 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management. Proficiency in programming languages like Python/PySpark and Java/Scala. Expertise in big data technologies such as Hadoop, Spark, Kafka, etc. In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MySQL, NoSQL databases). Experience and expertise in building complex end-to-end data pipelines. Experience with orchestration and designing job schedules using CI/CD and scheduling tools like Jenkins and Airflow. Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.). Ability to mentor junior team members. Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse). Strong leadership, problem-solving, and decision-making skills. Excellent communication and collaboration abilities.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI's Internet Crime Complaint Center (IC3).
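As a hedged illustration of the pipeline-orchestration experience this role asks for (job scheduling with Airflow), here is a bare-bones DAG sketch showing dependent extract, transform, and load tasks. It is not Oportun's code; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal Airflow DAG wiring extract -> transform -> load.
# On Airflow versions before 2.4, the "schedule" argument is called "schedule_interval".
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")       # placeholder task body

def transform():
    print("apply transformations and data-quality rules")

def load():
    print("load curated data into the warehouse")

with DAG(
    dag_id="example_daily_etl",                   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```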

Posted 1 day ago

Apply

0 years

0 Lacs

Pune

On-site

Job Description
Intern - Data Solutions

As an Intern - Data Solutions, you will be part of the Commercial Data Solutions team, providing technical and data expertise for the development of analytical data products that enable data science and analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space, developing best-in-class data pipelines and products and working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery.

Your specific responsibilities will include: Hands-on development of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices. Enabling data science and analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way. Developing deep domain expertise and business acumen to ensure that all the specifics and pitfalls of data sources are accounted for. Building data products based on automated data models, aligned with use case requirements, and advising data scientists, analysts and visualization developers on how to use these data models. Developing analytical data products for reusability, governance and compliance by design. Aligning with organization strategy and implementing a semantic layer for analytics data products. Supporting data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks.

Education: B.Tech/B.S., M.Tech/M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field.

Required experience: High proficiency in SQL, Python and AWS. Good understanding and comprehension of the requirements provided by the Data Product Owner and Lead Analytics Engineer. Understanding of creating/adopting data models to meet requirements from Marketing, Data Science and Visualization stakeholders. Experience with feature engineering. Hands-on experience with cloud-based (AWS/GCP/Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.). Hands-on experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku). Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Intern/Co-op (Fixed Term)
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 06/16/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R344334

Posted 2 days ago

Apply

10.0 years

0 Lacs

Noida

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description
Role: Service Desk Manager

Skills: Java/Microservices/Mainframe

Backend: server-side frameworks – Java 17, Spring Boot, Spring MVC; microservice architecture and REST API design; unit testing frameworks – JUnit, Mockito; performance tuning and profiling – VisualVM, JMeter; logging and troubleshooting – New Relic, Dynatrace, Kibana, CloudWatch.

Database: relational databases – Oracle, MSSQL, MySQL, Postgres, DB2; NoSQL databases – MongoDB, Redis, DynamoDB.

Cloud platform, deployment and DevOps: cloud platforms – AWS (ECS, EC2, CloudFront, CloudWatch, S3, IAM, Route 53, ALB), Redshift, RDS; DevOps – Docker containers, SonarQube, Jenkins, Git, GitHub, GitHub Actions, CI/CD pipelines, Terraform.

Mainframe: COBOL, JCL, DB2 (advanced SQL skills), Alight COOL. Tools: Xpeditor, File-Aid (Smart-File), DumbMaster (AbendAid), Omegamon, iStrobe/Strobe.

Roles & Responsibilities
Ensure the architectural needs of the value stream are realized, rather than those of just a channel or a team. Apply strong knowledge of design principles to design and build robust, scalable software solutions across the full stack, with no or minimal adoption effort required by clients. Work with Product Management, Product Owners, and other value-stream stakeholders to help ensure alignment of strategy and execution. Help manage risks and dependencies, and troubleshoot and proactively solve problems. Collaborate and review with other architects, participating in the Architecture CoP and ARB. Own performance and non-functional requirements, including end-to-end ownership. Acquire broader insight into the product and enterprise architecture. Spread broader knowledge among other engineers in the value stream. Bring proven experience in the stated engineering skills. Aggregate program PI objectives into value-stream PI objectives and publish them for visibility and transparency. Assess the agility level of the program/value stream and help improve it.

Competencies
Client centricity, collaborative working, learning agility, problem solving and decision making, effective communication.

Mandatory Skills: Technology (Alight IT). Experience: over 10 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote


Job Title: Data Analyst Trainee
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics

Job Summary
We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization.

Key Responsibilities
Collect, clean, and analyze large datasets from various sources. Perform exploratory data analysis (EDA) and generate actionable insights. Build interactive dashboards and reports using Excel, Power BI, or Tableau. Write and optimize SQL queries for data extraction and manipulation. Collaborate with cross-functional teams to understand data needs. Document analytical methodologies, insights, and recommendations.

Qualifications
Bachelor's degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field. Proficiency in Excel and SQL. Working knowledge of Python (Pandas, NumPy, Matplotlib) or R. Understanding of basic statistics and analytical methods. Strong attention to detail and problem-solving ability. Ability to work independently and communicate effectively in a remote setting.

Preferred Skills (Nice to Have)
Experience with BI tools like Power BI, Tableau, or Google Data Studio. Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift). Knowledge of data storytelling and KPI measurement. Previous academic or personal projects in analytics.

What We Offer
Monthly stipend of ₹25,000. Fully remote internship. Mentorship from experienced data analysts and domain experts. Hands-on experience with real business data and live projects. Certificate of Completion. Opportunity for a full-time role based on performance.
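To illustrate the "clean and analyze" responsibilities this traineeship describes, here is a tiny, hedged pandas sketch of exploratory data analysis. The sample data, column names, and cleaning rules are hypothetical and exist only so the snippet runs without any external files.

```python
# Illustrative sketch only: minimal cleaning and exploratory data analysis with pandas.
import pandas as pd

# Hypothetical "orders" sample containing a duplicate row and a missing amount.
df = pd.DataFrame(
    {
        "order_id": [1, 2, 2, 3, 4],
        "region": ["North", "South", "South", "North", "East"],
        "amount": [120.0, 80.5, 80.5, None, 200.0],
    }
)

# Cleaning: drop exact duplicates, fill the missing amount with the median.
df = df.drop_duplicates()
df["amount"] = df["amount"].fillna(df["amount"].median())

# Simple EDA: summary statistics and a per-region aggregate (a typical dashboard input).
print(df.describe())
print(df.groupby("region")["amount"].agg(["count", "sum", "mean"]))
```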

Posted 2 days ago

Apply

1.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description:

Location: Indore, Noida, Pune and Bengaluru
Qualifications: BE/B.Tech/MCA/M.Tech/M.Com in Computer Science or a related field

Required Skills
EDW expertise: hands-on experience with Teradata or Oracle. PL/SQL proficiency: strong ability to write complex queries. Performance tuning: expertise in optimizing queries to meet SLA requirements. Communication: strong verbal and written communication skills. Experience required: 1-3 years.

Preferred Skills
Cloud technologies: working knowledge of AWS S3 and Redshift or equivalent. Database migration: familiarity with database migration processes. Big data tools: understanding of Spark SQL and PySpark. Programming: experience with Python for data processing and analytics. Data management: experience with import/export operations.

Roles & Responsibilities
Module ownership: manage a module and assist the team. Optimized PL/SQL development: write efficient queries. Performance tuning: improve database speed and efficiency. Requirement analysis: work with business users to refine needs. Application development: build solutions using complex SQL queries. Data validation: ensure the integrity of large datasets (TB/PB scale). Testing and debugging: conduct unit testing and fix issues. Database strategies: apply best practices for development.

Interested candidates can share their resumes at anubhav.pathania@impetus.com

Posted 2 days ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


We deliver the world’s most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role.

Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.

The Role
Develop and implement data pipelines for ingesting and collecting data from various sources into a centralized data platform.
Develop and maintain ETL jobs using AWS Glue services to process and transform data at scale.
Optimize and troubleshoot AWS Glue jobs for performance and reliability.
Utilize Python and PySpark to efficiently handle large volumes of data during the ingestion process.
Collaborate with data architects to design and implement data models that support business requirements.
Create and maintain ETL processes using Airflow, Python and PySpark to move and transform data between different systems.
Implement monitoring solutions to track data pipeline performance and proactively identify and address issues.
Manage and optimize databases, both SQL and NoSQL, to support data storage and retrieval needs.
Familiarity with Infrastructure as Code (IaC) tools like Terraform, AWS CDK and others.
Proficiency in event-driven integrations, batch-based and API-led data integrations.
Proficiency in CI/CD pipelines such as Azure DevOps, AWS pipelines or GitHub Actions.

About You
To be considered for this role it is envisaged you will possess the following attributes:

Technical and Industry Experience:
Independent integration developer with 5+ years of experience in developing and delivering integration projects in an agile or waterfall-based project environment.
Proficiency in Python, PySpark and SQL programming languages for data manipulation and pipeline development.
Hands-on experience with AWS Glue, Airflow, DynamoDB, Redshift, S3 buckets, Event-Grid, and other AWS services.
Experience implementing CI/CD pipelines, including data testing practices.
Proficient in Swagger, JSON, XML, SOAP and REST based web service development.

Behaviors Required:
Driven by our values and purpose in everything we do.
Visible, active, hands-on approach to help teams be successful.
Strong proactive planning ability.
Optimistic, energetic, problem solver, ability to see long term business outcomes.
Collaborative, ability to listen, compromise to make progress.
Stronger together mindset, with a focus on innovation & creation of tangible / realized value.
Challenge the status quo.

Education – Qualifications, Accreditation, Training:
Degree in Computer Science and/or related fields.
AWS Data Engineering certifications desirable.

Moving forward together
We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law.

We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology.

Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.

Company: Worley
Primary Location: IND-MM-Mumbai
Job: Digital Solutions
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: Jun 4, 2025
Unposting Date: Jul 4, 2025
Reporting Manager Title: Director
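As a rough illustration of the AWS Glue and PySpark work described in the listing above, here is a minimal Glue job sketch that reads raw files from S3, applies a few transformations, and writes partitioned Parquet. The bucket paths, column names, and schema are invented placeholders, not anything from the posting.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job boilerplate: resolve arguments, build contexts, initialise the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest raw CSV drops from S3, standardise a few fields, and de-duplicate.
raw = spark.read.option("header", "true").csv("s3://example-raw-zone/meter-readings/")
curated = (raw
           .withColumn("reading_value", F.col("reading_value").cast("double"))
           .withColumn("reading_date", F.to_date("reading_ts"))
           .dropDuplicates(["meter_id", "reading_ts"]))

# Write curated data partitioned by date for downstream consumers.
(curated.write
 .mode("overwrite")
 .partitionBy("reading_date")
 .parquet("s3://example-curated-zone/meter-readings/"))

job.commit()
```

In practice such a job would typically be parameterised (source path, target path, run date) via Glue job arguments and scheduled from Airflow or Step Functions, as the listing suggests.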

Posted 2 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Overview
Working at Atlassian
Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities
Partner with Data Science, Product Manager, Analytics, and Business teams to review and gather the data/reporting/analytics requirements and build trusted and scalable data models, data extraction processes, and data applications to help answer complex questions.
Design and implement data pipelines to ETL data from multiple sources into a central data warehouse.
Design and implement real-time data processing pipelines using Apache Spark Streaming.
Improve data quality by leveraging internal tools/frameworks to automatically detect and mitigate data quality issues.
Develop and implement data governance procedures to ensure data security, privacy, and compliance.
Implement new technologies to improve data processing and analysis.
Coach and mentor junior data engineers to enhance their skills and foster a collaborative team environment.

Qualifications
A BE in Computer Science or equivalent with 8+ years of professional experience as a Data Engineer or in a similar role.
Experience building scalable data pipelines in Spark using Airflow scheduler/executor framework or similar scheduling tools.
Experience with Databricks and its APIs.
Experience with modern databases (Redshift, DynamoDB, MongoDB, Postgres or similar) and data lakes.
Proficient in one or more programming languages such as Python/Scala and rock-solid SQL skills.
Champion automated builds and deployments using CI/CD tools like Bitbucket, Git.
Experience working with large-scale, high-performance data processing systems (batch and streaming).

Our perks & benefits
Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more.

About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together.
We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines.
To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them.
To learn more about our culture and hiring process, visit go.atlassian.com/crh .
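To illustrate the kind of real-time pipeline mentioned above, here is a minimal Spark Structured Streaming sketch that reads events from Kafka, parses a field from the JSON payload, and writes windowed counts to Parquet. Broker addresses, topic names, paths, and the JSON field are assumptions for the example, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a Kafka topic as a stream; broker and topic names are placeholders.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "product-events")
          .load())

# Extract one field from the JSON value; real pipelines would use from_json with a schema.
parsed = events.select(
    F.col("key").cast("string"),
    F.get_json_object(F.col("value").cast("string"), "$.event_type").alias("event_type"),
    F.col("timestamp"))

# Windowed counts with a watermark so late data is bounded.
counts = (parsed
          .withWatermark("timestamp", "10 minutes")
          .groupBy(F.window("timestamp", "5 minutes"), "event_type")
          .count())

query = (counts.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "s3://example-warehouse/event_counts/")        # placeholder path
         .option("checkpointLocation", "s3://example-warehouse/_chk/")  # placeholder path
         .start())
query.awaitTermination()
```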

Posted 2 days ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Location: Mumbai (Bandra)
Experience: 4 to 6 years
Work Model: Full-time | Roster-based (24/7 support model)
Travel: Occasional (for cloud initiatives, conferences, or training)

The candidate should have hands-on experience in developing and managing AWS infrastructure and DevOps setup, along with the ability to delegate and distribute tasks effectively within the team.

Key Responsibilities
Architect, deploy, automate, and manage AWS-based production systems.
Ensure high availability, scalability, security, and reliability of cloud environments.
Troubleshoot complex issues across multiple cloud applications and platforms.
Design, implement, and maintain tools to automate operational processes.
Provide operational support and engineering assistance for cloud-related issues and deployments.
Lead platform security initiatives in collaboration with development and security teams.
Define, maintain, and improve policies and standards for Infrastructure as Code (IaC) and CI/CD practices.
Work closely with Product Owners and development teams to identify and deliver continuous improvements.
Mentor junior team members, manage workloads, and ensure timely task execution.

Required Technical Skills
Cloud & AWS Services – strong hands-on experience with AWS core services including:
Compute & Networking: EC2, VPC, VPN, EKS, Lambda
Storage & CDN: S3, CloudFront
Monitoring & Logging: CloudWatch
Security: IAM, Secrets Manager, SNS
Data & Analytics: Kinesis, Redshift, EMR
DevOps Services: CodeCommit, CodeBuild, CodePipeline, ECR
Other Services: Route53, AWS Organizations

DevOps & Tooling – experience with tools like:
CI/CD & IaC: Terraform, FluxCD
Security & Monitoring: Prisma Cloud, SonarQube, Site24x7
API & Gateway Management: Kong

Preferred Qualifications:
AWS Certified Solutions Architect – Associate or Professional
Experience in managing or mentoring cross-functional teams
Exposure to multi-cloud or hybrid cloud architectures is a plus
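As a small illustration of the operational automation this role calls for, here is a hedged Python/boto3 sketch that scans EC2 instances for a missing governance tag and raises an SNS alert. The required tag name and topic ARN are hypothetical placeholders; a production setup would more likely enforce this through IaC policies or AWS Config rules.

```python
import boto3

REQUIRED_TAG = "Owner"                                             # assumed tagging policy
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:ops-alerts"       # placeholder topic

def find_untagged_instances(ec2):
    """Return instance IDs that lack the required tag."""
    untagged = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(inst["InstanceId"])
    return untagged

def main():
    ec2 = boto3.client("ec2")
    sns = boto3.client("sns")
    missing = find_untagged_instances(ec2)
    if missing:
        sns.publish(TopicArn=TOPIC_ARN,
                    Subject="EC2 instances missing Owner tag",
                    Message="\n".join(missing))

if __name__ == "__main__":
    main()
```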

Posted 2 days ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Company Description
BEYOND SOFTWARES AND CONSULTANCY SERVICES Pvt. Ltd. (BSC Services) is committed to delivering innovative solutions to meet clients' evolving needs, particularly in the telecommunication industry. We provide a variety of software solutions for billing and customer management, network optimization, and data analytics. Our skilled team of software developers and telecom specialists collaborates closely with clients to understand their specific requirements and deliver high-quality, secure software solutions. We strive to build long-term relationships based on trust, transparency, and open communication, ensuring our clients stay competitive and grow in a dynamic market.

Role Description
We are looking for a Data Engineer with expertise in dbt and Airflow for a full-time remote position. The Data Engineer will be responsible for designing, developing, and managing data pipelines and ETL processes. Day-to-day tasks include data modeling, data warehousing, and implementing data analytics solutions. The role involves collaborating with cross-functional teams to ensure data integrity and optimize data workflows.

Must Have Skills:
5 to 10 years of IT experience in data transformation in an Amazon Redshift data warehouse using Apache Airflow, Data Build Tool (dbt) and Cosmos.
Hands-on experience working in complex data warehouse implementations.
Expert in advanced SQL.
The Data Engineer will be responsible for designing, developing, testing and maintaining data pipelines using AWS Redshift, dbt, and Airflow.
Experienced in data analytical skills.
Minimum 5 years of hands-on experience in the Amazon Redshift data warehouse.
Experience of dbt (Data Build Tool) for data transformation.
Experience in developing, scheduling & monitoring workflow orchestration using Apache Airflow.
Experienced in the Astro & Cosmos libraries.
Experience in construction of the DAG in Airflow.
Experience of DevOps: Bitbucket, GitHub or GitLab.
Minimum 5 years of experience in data transformation projects.
Development of data ingestion pipelines and robust ETL frameworks.
Strong hands-on experience in analysing data on large datasets.
Extensive experience in dimensional data modelling, including complex entity relationships and historical data entities.
Implementation of data cleansing and data quality features in ETL pipelines.
Implementation of data streaming solutions from different sources for data migration & transformation.
Extensive data engineering experience using Python.
Experience in SQL and performance tuning.
Hands-on experience parsing responses generated by APIs (REST/XML/JSON).
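To make the Airflow DAG construction mentioned above concrete, here is a minimal, illustrative Airflow 2.x DAG that runs dbt against a Redshift project via BashOperator tasks. The project path, profile location, and schedule are assumptions for the sketch; the Cosmos library referenced in the posting can instead expand each dbt model into its own Airflow task, but the BashOperator version keeps the example dependency-free.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/airflow/dbt/redshift_project"  # placeholder dbt project path

with DAG(
    dag_id="dbt_redshift_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["dbt", "redshift"],
) as dag:
    # Install packages, build models, then run dbt tests as separate tasks.
    dbt_deps = BashOperator(task_id="dbt_deps",
                            bash_command=f"cd {DBT_DIR} && dbt deps")
    dbt_run = BashOperator(task_id="dbt_run",
                           bash_command=f"cd {DBT_DIR} && dbt run --profiles-dir .")
    dbt_test = BashOperator(task_id="dbt_test",
                            bash_command=f"cd {DBT_DIR} && dbt test --profiles-dir .")

    dbt_deps >> dbt_run >> dbt_test
```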

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site


Company Description
ThreatXIntel is a startup cyber security company specializing in cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We offer customized, affordable solutions tailored to meet the specific needs of businesses of all sizes. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they can be exploited.

Role Description
We are looking for a skilled freelance Data Engineer with expertise in PySpark and AWS data services, particularly S3 and Redshift. Familiarity with Salesforce data integration is a plus. This role focuses on building scalable data pipelines and supporting analytics use cases in a cloud-native environment.

Key Responsibilities
Design and develop ETL/ELT data pipelines using PySpark for large-scale data processing
Ingest, transform, and store data across AWS S3 (data lake) and Amazon Redshift (data warehouse)
Integrate data from Salesforce into the cloud data ecosystem for analysis
Optimize data workflows for performance and cost-efficiency
Write efficient code and queries for structured and unstructured data
Collaborate with analysts and stakeholders to deliver clean, usable datasets

Required Skills
Strong hands-on experience with PySpark
Proficient in AWS services, especially S3 and Redshift
Basic working knowledge of Salesforce data structure or API
Ability to write complex SQL for data transformation and reporting
Familiarity with version control and Agile collaboration tools
Good communication and documentation skills
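Here is a hedged PySpark sketch of the S3-to-Redshift flow described above: read landed Salesforce extracts from S3, clean them, and append to a Redshift staging table over JDBC. The bucket, columns, credentials, and JDBC URL are placeholders, and for large volumes a dedicated Spark-Redshift connector with an S3 staging directory (or a COPY command) is usually preferred over a plain JDBC write.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# The Redshift/PostgreSQL JDBC driver must be on the Spark classpath for the write step.
spark = SparkSession.builder.appName("s3-to-redshift").getOrCreate()

raw = spark.read.json("s3a://example-raw-zone/salesforce/accounts/")  # landed extracts

clean = (raw
         .dropDuplicates(["Id"])
         .withColumn("ingested_at", F.current_timestamp())
         .select("Id", "Name", "Industry", "AnnualRevenue", "ingested_at"))

(clean.write
 .format("jdbc")
 .option("url", "jdbc:redshift://example-cluster.redshift.amazonaws.com:5439/analytics")
 .option("dbtable", "staging.sf_accounts")
 .option("user", "etl_user")          # placeholder credentials
 .option("password", "change-me")
 .mode("append")
 .save())
```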

Posted 2 days ago

Apply

1.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Summary
We are hiring a Data Analyst to turn complex data into actionable insights using Power BI, Tableau, QuickSight, and SQL. You’ll collaborate with teams across the organization to design dashboards, optimize data pipelines, and drive data literacy. This is an onsite role in Noida, ideal for problem-solvers passionate about data storytelling.

Key Responsibilities
Dashboard & Reporting: Develop interactive dashboards in Power BI, Tableau, and QuickSight with drill-down capabilities. Automate reports and ensure real-time data accuracy.
Data Analysis & SQL: Write advanced SQL queries (window functions, query optimization) for large datasets. Perform root-cause analysis on data discrepancies.
ETL & Data Pipelines: Clean, transform, and model data using ETL tools (e.g., Python scripts). Work with cloud data warehouses (Snowflake, Redshift, BigQuery).
Stakeholder Collaboration: Translate business requirements into technical specs. Train non-technical teams on self-service analytics tools.
Performance Optimization: Improve dashboard load times and SQL query efficiency. Implement data governance best practices.

Technical Skills
Must-Have:
✔ BI Tools: Power BI (DAX, Power Query), Tableau (LODs, parameters), QuickSight (SPICE, ML insights)
✔ SQL: Advanced querying, indexing, stored procedures
✔ Data Modeling: Star schema, normalization, performance tuning
✔ Excel/Sheets: PivotTables, VLOOKUP, Power Query
Nice-to-Have:
☑ Programming: Python/R (Pandas, NumPy) for automation
☑ Cloud Platforms: AWS (QuickSight, S3), Azure (Synapse), GCP
☑ Version Control: Git, GitHub

Soft Skills
Strong communication to explain insights to non-technical teams.
Curiosity to explore data anomalies and trends.
Project management (Agile/Scrum familiarity is a plus).

Qualifications
Bachelor’s/Master’s in Data Science, Computer Science, or related field.
1 - 3 years in data analysis, with hands-on experience in Power BI/Tableau/QuickSight.
Portfolio of dashboards or GitHub projects (preferred)
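For the "advanced SQL with window functions" requirement above, here is a small illustrative Python example that pulls a running total and month-over-month delta per region into pandas for dashboarding. The connection string, table, and columns are invented placeholders; the same SQL pattern works on Redshift, Snowflake, or BigQuery with minor dialect changes.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection; Redshift speaks the PostgreSQL wire protocol.
engine = create_engine("postgresql+psycopg2://analyst:pw@warehouse:5439/sales")

SQL = """
SELECT
    region,
    order_month,
    revenue,
    SUM(revenue) OVER (PARTITION BY region ORDER BY order_month
                       ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_revenue,
    revenue - LAG(revenue) OVER (PARTITION BY region ORDER BY order_month) AS mom_change
FROM monthly_sales
"""

df = pd.read_sql(SQL, engine)
print(df.head())
```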

Posted 2 days ago

Apply

0.0 - 7.0 years

0 Lacs

Delhi

On-site


Job requisition ID: 84245
Date: Jun 15, 2025
Location: Delhi
Designation: Consultant

What impact will you make?
Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.

The Team
Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about the Analytics and Information Management practice.

Work you’ll do
As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.

We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and Big Data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.

Overview:
We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Experience: 2 to 7 years
Location: Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Required Qualifications:
1. Bachelor’s degree in Computer Science, Information Technology, or related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
Big Data: Hadoop, Spark, Delta Lake
Programming: Python, PySpark
Databases: SQL, PostgreSQL, NoSQL
Data Warehousing and Analytics
ETL/ELT processes
Data Lake architectures
Version control: GitHub

Your role as a leader
At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society and make an impact that matters. In addition to living our purpose, Senior Consultants across our organization:
Develop high-performing people and teams through challenging and meaningful opportunities
Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders
Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people
Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction
Act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make

How you will grow
At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose
Deloitte is led by a purpose: To make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips
We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
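As a small illustration of the Lambda/Athena analytics work listed above, here is a hedged boto3 sketch of a Lambda handler that runs an Athena query and returns the row count. The database, query, and output bucket are placeholders; for long-running queries, orchestrating the poll loop with Step Functions (also named in the posting) is the more robust pattern.

```python
import time
import boto3

ATHENA_DB = "analytics"                       # placeholder database
OUTPUT = "s3://example-athena-results/"       # placeholder result location

def lambda_handler(event, context):
    athena = boto3.client("athena")
    qid = athena.start_query_execution(
        QueryString="SELECT order_date, COUNT(*) AS orders FROM orders GROUP BY order_date",
        QueryExecutionContext={"Database": ATHENA_DB},
        ResultConfiguration={"OutputLocation": OUTPUT},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(
            QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return {"row_count": len(rows) - 1}  # first row is the header
```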

Posted 2 days ago

Apply


10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Senior Data Engineer – AWS Expert (Lead/Associate Architect Level)
📍 Location: Trivandrum or Kochi (On-site/Hybrid)
Experience: 10+ years (relevant experience in AWS of 5+ years is mandatory)

About the Role
We’re hiring a Senior Data Engineer with deep expertise in AWS services, strong hands-on experience in data ingestion, quality, and API development, and the leadership skills to operate at a Lead or Associate Architect level. This role demands a high level of technical ownership, especially in architecting scalable, reliable data pipelines and robust API integrations. You’ll collaborate with cross-functional teams across geographies, so a willingness to work night shifts overlapping with US hours (till 10 AM IST) is essential.

Key Responsibilities
Data Engineering Leadership: Design and implement scalable, end-to-end data ingestion and processing frameworks using AWS.
AWS Architecture: Hands-on development using AWS Glue, Lambda, EMR, Step Functions, S3, ECS, and other AWS services.
Data Quality & Validation: Build automated checks, validation layers, and monitoring for ensuring data accuracy and integrity.
API Development: Develop secure, high-performance REST APIs for internal and external data integration.
Collaboration: Work closely with product, analytics, and DevOps teams across geographies. Participate in Agile ceremonies and CI/CD pipelines using tools like GitLab.

What We’re Looking For
Experience: 5+ years in Data Engineering, with a proven track record in designing scalable AWS-based data systems.
Technical Mastery: Proficient in Python/PySpark, SQL, and building big data pipelines.
AWS Expert: Deep knowledge of core AWS services used for data ingestion and processing.
API Expertise: Experience designing and managing scalable APIs.
Leadership Qualities: Ability to work independently, lead discussions, and drive technical decisions.

Preferred Qualifications
Experience with Kinesis, Firehose, SQS, and data lakehouse architectures.
Exposure to tools like Apache Iceberg, Aurora, Redshift, and DynamoDB.
Prior experience in distributed, multi-cluster environments.

Working Hours
US Time Zone Overlap Required: Must be available to work night shifts overlapping with US hours (up to 10:00 AM IST).

Work Location
Trivandrum or Kochi – on-site or hybrid options available for the right candidate.
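To illustrate the "automated checks and validation layers" responsibility above, here is a minimal PySpark data-quality gate that fails the pipeline step when basic checks do not hold. The dataset path, key column, and thresholds are assumptions for the sketch, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3a://example-curated-zone/orders/")  # placeholder dataset

# Simple, named checks; real frameworks would also record metrics for monitoring.
checks = {
    "row_count_nonzero": df.count() > 0,
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    "no_duplicate_keys": df.count() == df.dropDuplicates(["order_id"]).count(),
    "amount_non_negative": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Raising here fails the step so the orchestrator (Step Functions, Airflow) can alert.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```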

Posted 2 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Testing Engineer
Experience: 8+ years
Location: Hyderabad and Gurgaon (Hybrid)
Notice Period: Immediate to 15 days

Job Description:
Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse.
Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules.
Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues.
Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts.
Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability.
Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt).
Experience with scripting languages to automate data testing.
Familiarity with data visualization tools like Tableau, Power BI, or Looker.
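The source-to-target validation described above often starts with comparing simple column profiles between the source system and the warehouse. Here is a hedged Python sketch of that idea; the connection strings, tables, and columns are placeholders, and the profile query would normally be parameterised per mapping rather than hard-coded.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connections: an operational source and a warehouse target.
source = create_engine("postgresql+psycopg2://app:pw@source-db:5432/sales")
target = create_engine("postgresql+psycopg2://etl:pw@warehouse:5439/analytics")

def column_profile(engine, table):
    """Return row count, distinct key count, and an amount checksum for one table."""
    sql = f"""
        SELECT COUNT(*)                 AS row_count,
               COUNT(DISTINCT order_id) AS distinct_keys,
               SUM(amount)              AS total_amount
        FROM {table}
    """
    return pd.read_sql(sql, engine).iloc[0]

src = column_profile(source, "public.orders")
tgt = column_profile(target, "staging.orders")

mismatches = [m for m in ("row_count", "distinct_keys", "total_amount") if src[m] != tgt[m]]
assert not mismatches, (
    f"Source-to-target mismatch on: {mismatches}\nsource={dict(src)}\ntarget={dict(tgt)}")
print("Source and target profiles match")
```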

Posted 2 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Position Title: Data Engineer
Experience Range: 4+ years
Location: Hyderabad and Gurgaon (Hybrid)
Notice Period: Immediate to 15 days
Primary Skills Required: Kafka, Spark, Python, SQL, Shell Scripting, Databricks, Snowflake, AWS and Azure Cloud

What you will do:
1. Provide expertise and guidance as a senior experienced engineer in solution design and strategy for Data Lake and analytics-oriented data operations.
2. Design, develop, and implement end-to-end data solutions (storage, integration, processing, access) in hyperscaler platforms like AWS and Azure.
3. Architect and implement integration, ETL, and data movement solutions using SQL Server Integration Services (SSIS)/C#, AWS Glue, MSK and/or Confluent, and other COTS technologies.
4. Prepare documentation and designs for data solutions and applications.
5. Design and implement distributed analytics platforms for analyst teams.
6. Design and implement streaming solutions using Snowflake, Kafka and Confluent.
7. Migrate data from traditional relational database systems (e.g., SQL Server, Postgres) to AWS relational databases such as Amazon RDS, Aurora, Redshift, DynamoDB, Cloudera, Snowflake, Databricks, etc.

Who you are:
1. Bachelor's degree in Computer Science or Software Engineering.
2. 4+ years of experience in the Data domain as an Engineer and Architect.
3. Demonstrated sense of ownership and accountability in delivering high-quality data solutions independently or with minimal handholding.
4. Ability to thrive in a dynamic environment, adapting to evolving requirements and challenges.
5. A solid understanding of AWS and Azure storage solutions such as S3, EFS, and EBS.
6. A solid understanding of AWS and Azure compute solutions such as EC2.
7. Experience implementing solutions on AWS and Azure relational databases such as MSSQL, SSIS, Amazon Redshift, RDS, and Aurora.
8. Experience implementing solutions leveraging ElastiCache and DynamoDB.
9. Experience designing and implementing Enterprise Data Warehouses and Data Marts/Lakes.
10. Experience with Star or Snowflake schema.
11. Experience with R or Python and other emerging technologies in D&A.
12. Understanding of Slowly Changing Dimensions and the Data Vault model.

AWS and Azure certifications are preferred.
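The Slowly Changing Dimension requirement above typically means maintaining an SCD Type 2 dimension. Below is a hedged Python sketch that runs the classic two-step pattern on Snowflake: expire changed current rows, then insert new current versions. The account, credentials, and table/column names (dim_customer, stg_customer) are invented placeholders, not anything from the posting.

```python
import snowflake.connector

# Step 1: close out current rows whose tracked attributes changed.
EXPIRE_CHANGED = """
MERGE INTO dim_customer d
USING stg_customer s
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHEN MATCHED AND (d.email <> s.email OR d.segment <> s.segment) THEN
  UPDATE SET d.is_current = FALSE, d.valid_to = CURRENT_TIMESTAMP()
"""

# Step 2: insert a new current version for any customer without one.
INSERT_CURRENT = """
INSERT INTO dim_customer (customer_id, email, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL
"""

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1", user="etl_user", password="change-me",  # placeholders
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="DIM",
)
cur = conn.cursor()
try:
    cur.execute(EXPIRE_CHANGED)
    cur.execute(INSERT_CURRENT)
finally:
    cur.close()
    conn.close()
```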

Posted 2 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Role Summary
Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy or supporting clinical trials, you will apply cutting edge design and process development capabilities to accelerate and bring the best in class medicines to patients around the world.

Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.

Role Responsibilities
Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes. Provide guidance and lead/co-lead moderately complex projects.
Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes.
Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs.
Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise.
Collaborate effectively with contractors to deliver technical enhancements.
Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment.
Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency.
Conduct root cause analysis and address production data issues.
Lead the design, development, and implementation of AI models and algorithms to solve sophisticated data analytics and supply chain initiatives.
Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects.
Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives.
Document and present findings, methodologies, and project outcomes to various stakeholders.
Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery.
Ability to work with large and complex datasets, including data cleaning, preprocessing, and feature selection.

Basic Qualifications
A Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations.
Over 2 years of experience in AI, machine learning, and large language models (LLMs) development and deployment.
Proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred.
Strong understanding of data structures, algorithms, and software design principles.
Programming Languages: Proficiency in Python, SQL, and familiarity with Java or Scala.
AI and Automation: Knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect. Ability to use GenAI or agents to augment data engineering practices.

Preferred Qualifications
Data Warehousing: Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
ETL Tools: Knowledge of ETL tools like Apache NiFi, Talend, or Informatica.
Big Data Technologies: Familiarity with Hadoop, Spark, and Kafka for big data processing.
Cloud Platforms: Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
Containerization: Understanding of Docker and Kubernetes for containerization and orchestration.
Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files.
Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune.
Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets.
Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch.
Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.

Non-standard Work Schedule, Travel or Environment Requirements
Occasional travel required.
Work Location Assignment: Hybrid

The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary and eligibility to participate in our share based long term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits | (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.

Sunshine Act
Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.
EEO & Employment Eligibility
Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.

Information & Business Tech

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies