
3632 Redshift Jobs - Page 50

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description: As a Data Engineer Intern, you will build and maintain complex data pipelines, assemble large and complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with teams of business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune. Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absences on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire internship, duly signed by a competent authority at their university; the internship offer will be subject to successful submission of this declaration.

Key job responsibilities: Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on data model design, architecture, implementation, and optimization. Interface with other teams to extract, transform, and load data from a wide variety of sources using AWS big data technologies such as EMR, Redshift, and Elasticsearch. Work with AWS services such as S3, Redshift, Lambda, and Glue, and explore the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work on SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Work closely with your peers in a group of talented engineers and build deep domain knowledge of Amazon's business domains. Own the development and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions. You must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.

Basic qualifications: Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field (enrolled in or having completed a Bachelor's degree). Hands-on experience in SQL and in languages such as Python. Knowledge of RDBMS, big data, NoSQL, ETL, and data warehousing concepts. Good written and oral communication skills, the ability to learn quickly, and the ability to adapt to a fast-paced development environment.

Preferred qualifications: Hands-on experience with AWS technologies such as Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of reporting and visualization tools used in the industry. Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2936655
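As a rough illustration of the kind of S3-to-Redshift loading this internship describes, a minimal Python sketch using the COPY command via redshift_connector might look like the following; the cluster endpoint, bucket, table, and IAM role are hypothetical placeholders, not details from the listing.

```python
# Minimal sketch: bulk-load a CSV file from S3 into a Redshift table with COPY.
# All names (cluster endpoint, database, table, bucket, IAM role) are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # placeholder endpoint
    database="dev",
    user="awsuser",
    password="********",
)

copy_sql = """
    COPY analytics.orders
    FROM 's3://example-bucket/raw/orders/2024-06-01.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    CSV IGNOREHEADER 1
    REGION 'ap-south-1';
"""

cur = conn.cursor()
cur.execute(copy_sql)                                   # bulk-load the file into the target table
cur.execute("SELECT COUNT(*) FROM analytics.orders;")   # quick sanity check on row count
print("rows loaded:", cur.fetchone()[0])

conn.commit()
conn.close()
```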

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description: As a Data Engineer Intern, you will build and maintain complex data pipelines, assemble large and complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with teams of business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune. Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absences on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire internship, duly signed by a competent authority at their university; the internship offer will be subject to successful submission of this declaration.

Key job responsibilities: Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on data model design, architecture, implementation, and optimization. Interface with other teams to extract, transform, and load data from a wide variety of sources using AWS big data technologies such as EMR, Redshift, and Elasticsearch. Work with AWS services such as S3, Redshift, Lambda, and Glue, and explore the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work on SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Work closely with your peers in a group of talented engineers and build deep domain knowledge of Amazon's business domains. Own the development and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions. You must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.

Basic qualifications: Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field (enrolled in or having completed a Bachelor's degree). Hands-on experience in SQL and in languages such as Python. Knowledge of RDBMS, big data, NoSQL, ETL, and data warehousing concepts. Good written and oral communication skills, the ability to learn quickly, and the ability to adapt to a fast-paced development environment.

Preferred qualifications: Hands-on experience with AWS technologies such as Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of reporting and visualization tools used in the industry. Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2900027

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description: As a Data Engineer Intern, you will build and maintain complex data pipelines, assemble large and complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with teams of business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune. Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absences on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire internship, duly signed by a competent authority at their university; the internship offer will be subject to successful submission of this declaration.

Key job responsibilities: Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on data model design, architecture, implementation, and optimization. Interface with other teams to extract, transform, and load data from a wide variety of sources using AWS big data technologies such as EMR, Redshift, and Elasticsearch. Work with AWS services such as S3, Redshift, Lambda, and Glue, and explore the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work on SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Work closely with your peers in a group of talented engineers and build deep domain knowledge of Amazon's business domains. Own the development and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions. You must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.

Basic qualifications: Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field (enrolled in or having completed a Bachelor's degree). Hands-on experience in SQL and in languages such as Python. Knowledge of RDBMS, big data, NoSQL, ETL, and data warehousing concepts. Good written and oral communication skills, the ability to learn quickly, and the ability to adapt to a fast-paced development environment.

Preferred qualifications: Hands-on experience with AWS technologies such as Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of reporting and visualization tools used in the industry. Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2802226

Posted 4 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description: As a Data Engineer Intern, you will build and maintain complex data pipelines, assemble large and complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will collaborate with teams of business analysts, managers, software development engineers, and data engineers to determine how best to design, implement, and support solutions. You will be challenged and given tremendous growth opportunities in a customer-facing, fast-paced, agile environment.

Job locations: By applying to this position, your application will be considered for all locations we hire for in India, including but not limited to Bengaluru, Chennai, Hyderabad, Delhi, and Pune. Please note that Amazon internships require a full-time commitment for the duration of the internship. During the internship, interns should not have any conflicts, including but not limited to academic projects, classes, or other internships/employment. Any exam-related details must be shared with the hiring manager to plan for absences on those days. Specific team norms around working hours will be communicated by the hiring/reporting manager at the start of the internship. Candidates receiving an internship will be required to submit a declaration of their availability to complete the entire internship, duly signed by a competent authority at their university; the internship offer will be subject to successful submission of this declaration.

Key job responsibilities: Design, implement, and support analytical data platform solutions for data-driven decisions and insights. Design data schemas and operate internal data warehouses and SQL/NoSQL database systems. Work on data model design, architecture, implementation, and optimization. Interface with other teams to extract, transform, and load data from a wide variety of sources using AWS big data technologies such as EMR, Redshift, and Elasticsearch. Work with AWS services such as S3, Redshift, Lambda, and Glue, and explore the latest AWS technologies to provide new capabilities and increase efficiency. Work on the data lake platform and its components, such as Hadoop and Amazon S3. Work on SQL technologies on Hadoop such as Spark, Hive, and Impala. Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Work closely with your peers in a group of talented engineers and build deep domain knowledge of Amazon's business domains. Own the development and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions. You must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.

Basic qualifications: Candidates must be in the final year of a Bachelor's or Master's course in Computer Science, Engineering, or a related field (enrolled in or having completed a Bachelor's degree). Hands-on experience in SQL and in languages such as Python. Knowledge of RDBMS, big data, NoSQL, ETL, and data warehousing concepts. Good written and oral communication skills, the ability to learn quickly, and the ability to adapt to a fast-paced development environment.

Preferred qualifications: Hands-on experience with AWS technologies such as Amazon S3, EMR, Amazon RDS, and Amazon Redshift. Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations. Knowledge of cloud services such as AWS or equivalent. Knowledge of reporting and visualization tools used in the industry. Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis. Sharp problem-solving skills and the ability to resolve ambiguous requirements.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2802226

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: BI Engineer – Amazon QuickSight Developer

Job Summary: We are seeking an experienced Amazon QuickSight Developer to join our BI team. This role requires deep expertise in designing and deploying intuitive, high-impact dashboards and managing all aspects of QuickSight administration. You'll collaborate closely with data engineers and business stakeholders to create scalable BI solutions that empower data-driven decisions across the organization.

Key Responsibilities

Dashboard Development & Visualization: Design, develop, and maintain interactive QuickSight dashboards using advanced visuals, parameters, and controls. Create reusable datasets and calculated fields using both SPICE and Direct Query modes. Implement advanced analytics such as level-aware calculations, ranking, period-over-period comparisons, and custom KPIs. Build dynamic, user-driven dashboards with multi-select filters, dropdowns, and custom date ranges. Optimize performance and usability to maximize business value and user engagement.

QuickSight Administration: Manage users, groups, and permissions through QuickSight and AWS IAM roles. Implement and maintain row-level security (RLS) to ensure appropriate data access. Monitor usage, SPICE capacity, and subscription resources to maintain system performance. Configure and maintain themes, namespaces, and user interfaces for consistent experiences. Work with IT/cloud teams on account-level settings and AWS integrations.

Collaboration & Data Integration: Partner with data engineers and analysts to understand data structures and business needs. Integrate QuickSight with AWS services such as Redshift, Athena, S3, and Glue. Ensure data quality and accuracy through robust data modeling and SQL optimization.

Required Skills & Qualifications: 3+ years of hands-on experience with Amazon QuickSight (development and administration). Strong SQL skills and experience working with large, complex datasets. Expert-level understanding of QuickSight security, RLS, SPICE management, and user/group administration. Strong sense of data visualization best practices and UX design principles. Proficiency with AWS data services including Redshift, Athena, S3, Glue, and IAM. Solid understanding of data modeling and business reporting frameworks.

Nice to Have: Experience with Python, AWS Lambda, or automating QuickSight administration via the SDK or CLI. Familiarity with modern data stack tools (e.g., dbt, Snowflake, Tableau, Power BI).

Apply Now: If you're passionate about building scalable BI solutions and making data come alive through visualization, we'd love to hear from you!
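As a rough sketch of the QuickSight administration tasks this role describes (auditing deployed dashboards and monitoring SPICE refreshes), the boto3 calls below illustrate the idea; the account ID and dataset ID are hypothetical placeholders, not details from the listing.

```python
# Rough sketch of QuickSight administration tasks via boto3.
import boto3

ACCOUNT_ID = "123456789012"          # placeholder AWS account ID
client = boto3.client("quicksight", region_name="ap-south-1")

# List dashboards to audit what is currently deployed in the account.
dashboards = client.list_dashboards(AwsAccountId=ACCOUNT_ID)
for d in dashboards["DashboardSummaryList"]:
    print(d["Name"], d["DashboardId"])

# Check recent SPICE ingestion runs for a dataset to monitor refresh health.
ingestions = client.list_ingestions(
    AwsAccountId=ACCOUNT_ID,
    DataSetId="example-dataset-id",  # hypothetical dataset ID
)
for ing in ingestions["Ingestions"]:
    print(ing["IngestionId"], ing["IngestionStatus"])
```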

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Saarthee: Saarthee is a global strategy, analytics, technology, and AI consulting company, where our passion for helping others fuels our approach and our products and solutions. We are a one-stop shop for all things data and analytics. Unlike other analytics consulting firms that are technology or platform specific, Saarthee's holistic and tool-agnostic approach is unique in the marketplace. Our Consulting Value Chain framework meets our customers where they are in their data journey. Our diverse and global team works with one objective in mind: our customers' success. At Saarthee, we are passionate about guiding organizations towards insight-fueled success. That's why we call ourselves Saarthee, inspired by the Sanskrit word 'Saarthi', which means charioteer, trusted guide, or companion. Co-founded in 2015 by Mrinal Prasad and Shikha Miglani, Saarthee already encompasses all the components of data analytics consulting. Saarthee is based in Philadelphia, USA, with offices in the UK and India.

We are seeking a talented Talent Acquisition Executive/Lead. The ideal candidate will be responsible for driving talent acquisition strategies to support our company's growth objectives. You will work closely with the HR department, business leaders, and hiring managers to identify, attract, and hire top talent in the industry. If you are passionate about building high-performing teams and have a proven track record of sourcing, hiring, and retaining top talent in the data analytics industry and related fields, we encourage you to apply for this exciting opportunity.

Key Responsibilities:

Technical Talent Acquisition: Lead the end-to-end recruitment process for roles in Data Engineering, Data Science, and Data Analytics, including software engineers, data scientists, machine learning engineers, and data architects. Utilize your technical expertise to assess candidates' proficiency in programming languages (Python, Java, Scala), data pipelines (ETL, Kafka), cloud platforms (AWS, Azure, GCP), and big data technologies (Hadoop, Spark).

Technical Screening & Assessment: Design and implement rigorous technical assessment processes, including coding tests, algorithm challenges, and system design interviews, to ensure candidates meet the high technical standards required for our projects.

Stakeholder Collaboration: Partner with the CTO, engineering leads, and data science teams to understand the specific technical requirements of each role. Translate these needs into effective job descriptions, recruitment strategies, and candidate evaluation criteria.

Pipeline Development: Build and maintain a robust pipeline of highly qualified candidates by leveraging networks, industry events, online platforms (GitHub, Stack Overflow), and advanced sourcing techniques such as Boolean search, AI-driven talent matching, and targeted outreach.

Industry Expertise: Stay current with trends in Data Engineering, Data Science, and Analytics, including advancements in AI/ML, data warehousing (Snowflake, Redshift), real-time analytics, and DevOps practices. Use this knowledge to proactively identify and engage with potential candidates who are at the forefront of these fields.

Diversity & Inclusion in Tech: Implement strategies to ensure diverse and inclusive hiring practices, focusing on underrepresented groups in technology. Develop partnerships with organizations and communities that support diversity in tech.

Talent Development & Retention: Work with technical leadership to create clear career pathways for data and engineering professionals within the company. Support ongoing training and development initiatives to keep teams updated with the latest technologies and methodologies.

Qualifications:

Experience: 3+ years in talent acquisition, with significant experience recruiting for Data Engineering, Data Science, Data Analytics, and technology roles in high-growth or technically complex environments.

Technical Knowledge: Strong background in the technologies and tools used in Data Engineering, Data Science, and Data Analytics, including but not limited to: AI/ML; programming languages (Python, R, Java, Scala); big data technologies (Hadoop, Spark, Kafka); cloud platforms (AWS, Azure, GCP); data processing (ETL, data pipelines, real-time streaming); and analytics/BI tools (Tableau, Power BI, Looker).

Leadership: Proven experience in building teams with a focus on technical roles, driving strategies that result in successful, high-impact hires.

Analytical & Data-Driven: Expertise in using data to guide recruitment decisions and strategies, including metrics on sourcing, pipeline health, and hiring efficiency.

Communication: Excellent ability to communicate complex technical requirements to both technical and non-technical stakeholders.

Commitment to Excellence: A relentless focus on quality, with a keen eye for identifying top technical talent who can thrive in challenging, innovative environments.

Soft Skills: Problem-solving: strong analytical and troubleshooting skills. Collaboration: excellent teamwork and communication skills to work effectively with cross-functional teams. Adaptability: ability to manage multiple tasks and projects in a fast-paced environment. Attention to detail: precision in diagnosing and fixing issues. Continuous learning: a proactive attitude towards learning new technologies and improving existing skills. Excellent verbal and written communication skills.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer II at JPMorgan Chase within Consumer Banking - Trust & Security, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job Responsibilities: Work with architects, machine learning engineers, and data engineers to identify the technical and functional needs of data systems. Ensure adherence to the defined development life cycle, software design practices, and architecture strategy and intent. Contribute to application frameworks in support of greater resiliency and self-healing capabilities. Contribute to monitoring frameworks to accomplish end-to-end flow monitoring and noiseless alerting with proper telemetry. Implement performance tests; identify bottlenecks and opportunities for optimization and continuous improvement. Participate in deep design reviews with application and platform teams throughout the life cycle to help develop software for reliability, speed, and scale. Design and develop distributed computation and parallel processing components to support high-volume data pipelines. Support DevOps and CI/CD processes.

Required Qualifications, Capabilities, and Skills: Formal training or certification in software engineering concepts and 2+ years of applied experience. Advanced knowledge of application, data, and infrastructure architecture disciplines. Experience with Big Data technologies (Impala, Hive, Redshift, Kafka, etc.). Experience using Spark to process large amounts of data. Experience in Java/Python/SQL development. Expertise in the AWS stack: designing, coding, testing, and delivering solutions that support high data volumes. Experience with Spring Boot building microservices and/or web apps. Advanced knowledge of one or more infrastructure components (e.g., containerization with Docker and Kubernetes). Experience in end-to-end systems automation and orchestration. Experience with DevOps toolchains. Strong debugging and troubleshooting skills.

Preferred Qualifications, Capabilities, and Skills: Good understanding of SDLC and ITIL practices. Knowledge of industry-wide technology trends and best practices.
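For a sense of the "Spark processing large amounts of data" requirement, a minimal PySpark aggregation sketch might look like the following; the S3 paths and column names are assumptions for illustration, not details from the posting.

```python
# Illustrative PySpark sketch: aggregate a large event dataset by day and type.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

# Read partitioned source data (placeholder path).
events = spark.read.parquet("s3a://example-bucket/events/")

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))     # derive a date column from the timestamp
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write results back in a partitioned layout for downstream consumers.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_event_counts/"
)
```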

Posted 1 month ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Syren Cloud: Syren Cloud Technologies is a cutting-edge company specializing in supply chain solutions and data engineering. Its intelligent insights, powered by technologies like AI and NLP, empower organizations with real-time visibility and proactive decision-making. From control towers to agile inventory management, Syren unlocks unparalleled success in supply chain management.

Role Summary: An Azure Data Architect is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.

Job Responsibilities: Act as a subject matter expert providing best-practice guidance on data lake and ETL architecture frameworks suitable for handling big data, both structured and unstructured. Drive business and service layer development with the customer by finding new opportunities, expanding existing solutions, and creating new ones. Provide hands-on subject matter expertise to build and implement Azure-based big data solutions. Research, evaluate, architect, and deploy new tools, frameworks, and patterns to build sustainable big data platforms for our clients. Facilitate and/or conduct requirements workshops. Collaborate on the prioritization of technical requirements. Collaborate with peer teams and vendors on the solution and its delivery. Hold overall accountability for project delivery. Work collaboratively with Product Management, Data Management, and other architects to deliver the cloud data platform and data as a service. Consult with clients to assess current problem states, define desired future states, define solution architecture, and make solution recommendations.

Job Requirements: Degree in computer science or equivalent preferred. Demonstrable experience in the architecture, design, implementation, and/or support of highly distributed applications. 12+ years of hands-on experience with data modelling, database design, data mining, and segmentation techniques. Working knowledge of and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing. Experience designing and building distributed systems capable of processing massive data volumes. Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP. Experience with debugging and performance tuning in distributed environments. Strong analytical skills with the ability to collect, organize, analyse, and broadcast significant amounts of information with attention to detail and accuracy. Experience dealing with structured and unstructured data. Must have Python and PySpark experience. EDW experience required (Azure Databricks, Redshift, Azure Synapse, etc.). Experience in ML and/or graph analysis is a plus.

Mandatory Skills: Azure Data Engineering; strong programming skills with Azure Databricks and PySpark.

Posted 1 month ago

Apply

1.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description: The role is for a 1-year term at Amazon.

Job Description: Are you interested in applying your strong quantitative analysis and big data skills to world-changing problems? Are you interested in driving the development of methods, models, and systems for strategy planning, transportation, and the fulfillment network? If so, then this is the job for you. Our team is responsible for creating core analytics tech capabilities, platform development, and data engineering. We develop scalable analytics applications across APAC, MENA, and LATAM. We standardize and optimize data sources and visualization efforts across geographies, and build up and maintain the online BI services and data mart. You will work with professional software development managers, data engineers, business intelligence engineers, and product managers, using rigorous quantitative approaches to ensure high-quality data tech products for our customers around the world, including India, Australia, Brazil, Mexico, Singapore, and the Middle East. Amazon is growing rapidly, and because we are driven by faster delivery to customers, a more efficient supply chain network, and lower cost of operations, our main focus is the development of strategic models and automation tools fed by our massive amounts of available data. You will be responsible for building models and tools that improve the economics of Amazon's worldwide fulfillment networks in emerging countries as Amazon increases the speed and decreases the cost of delivering products to customers. You will identify and evaluate opportunities to reduce variable costs by improving fulfillment center processes, transportation operations and scheduling, and the execution of operational plans.

Major Responsibilities Include: Translating business questions and concerns into specific analytical questions that can be answered with available data using BI tools; producing the required data when it is not available. Writing SQL queries and automation scripts. Ensuring data quality throughout all stages of acquisition and processing, including data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc. Communicating proposals and results in a clear manner, backed by data and coupled with actionable conclusions to drive business decisions. Collaborating with colleagues from multidisciplinary science, engineering, and business backgrounds. Developing efficient data querying and modeling infrastructure. Managing your own process: prioritizing and executing high-impact projects, triaging external requests, and ensuring projects are delivered on time. Utilizing code (SQL, Python, R, Scala, etc.) to analyze data and build data marts.

Basic Qualifications: 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with data modeling, warehousing, and building ETL pipelines. Experience with statistical analysis packages such as R, SAS, and MATLAB. Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling.

Preferred Qualifications: Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ASSPL - Telangana
Job ID: A3005884
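As an illustration of the "writing SQL queries and automation scripts" responsibility, a hedged Python sketch using the Redshift Data API is shown below; the cluster, database, user, and table names are made up for the example.

```python
# Sketch: run an analytical SQL query against Redshift via the Data API and print the rows.
import time
import boto3

rsd = boto3.client("redshift-data", region_name="ap-south-1")

resp = rsd.execute_statement(
    ClusterIdentifier="example-cluster",   # placeholder cluster
    Database="analytics",
    DbUser="bi_user",
    Sql="""
        SELECT ship_region, COUNT(*) AS shipments
        FROM ops.shipments
        WHERE ship_date >= DATEADD(day, -7, CURRENT_DATE)
        GROUP BY ship_region
        ORDER BY shipments DESC;
    """,
)

# Poll until the statement finishes, then fetch the result set.
while True:
    status = rsd.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    for record in rsd.get_statement_result(Id=resp["Id"])["Records"]:
        print(record)
```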

Posted 1 month ago

Apply

5.0 years

4 - 9 Lacs

Gurgaon

On-site

Location: Gurugram, India
Employment Type: Full time
Department: Engineering

About us: SentiLink provides innovative identity and risk solutions, empowering institutions and individuals to transact confidently with one another. By building the future of identity verification in the United States and reinventing the currently clunky, ineffective, and expensive process, we believe strongly that the future will be 10x better. We've had tremendous traction and are growing extremely quickly. Already our real-time APIs have helped verify hundreds of millions of identities, beginning with financial services. In 2021, we raised a $70M Series B round, led by Craft Ventures, to rapidly scale our best-in-class products. We've earned coverage and awards from TechCrunch, CNBC, Bloomberg, Forbes, Business Insider, PYMNTS, American Banker, and LendIt, and have been named to the Forbes Fintech 50 list consecutively since 2023. Last but not least, we've even been a part of history: we were the first company to go live with eCBSV and testified before the United States House of Representatives.

About the Role: Are you passionate about creating world-class solutions that fuel product stability and continuously improve infrastructure operations? We're looking for a driven Infrastructure Engineer to architect, implement, and maintain powerful observability systems that safeguard the performance and reliability of our most critical systems. In this role, you'll take real ownership, collaborating with cross-functional teams to shape best-in-class observability standards, troubleshoot complex issues, and fine-tune monitoring tools to exceed SLA requirements. If you're ready to design high-quality solutions, influence our technology roadmap, and make a lasting impact on our product's success, we want to meet you!

Responsibilities: Improve alerting across SentiLink systems and services, developing high-quality monitoring capabilities while actively reducing false positives. Troubleshoot, debug, and resolve infrastructure issues as they arise; participate in on-call rotations for production issues. Define and refine Service Level Indicators (SLIs), Service Level Objectives (SLOs), and Service Level Agreements (SLAs) in collaboration with product and engineering teams. Develop monitoring and alerting configurations using IaC solutions such as Terraform. Build and maintain dashboards to provide visibility into system performance and reliability. Collaborate with engineering teams to improve root cause analysis processes and reduce Mean Time to Recovery (MTTR). Drive cost optimization for observability tools like Datadog, CloudWatch, and Sumo Logic. Perform capacity testing to develop a deep understanding of infrastructure performance under load, and develop alerting based on the learnings. Oversee, develop, and operate Kubernetes and service mesh infrastructure, ensuring smooth performance and reliability. Investigate operational alerts, identify root causes, and compile comprehensive root cause analysis reports; pursue action items relentlessly until they are thoroughly completed. Conduct in-depth examinations of database operational issues, actively developing and improving database architecture, schema, and configuration for enhanced performance and reliability. Develop and maintain incident response runbooks and improve processes to minimize service downtime. Research and evaluate new observability tools and technologies to enhance system monitoring.

Requirements: 5 years of experience in cloud infrastructure, DevOps, or systems engineering. Expertise in AWS and infrastructure-as-code development. Experience with CI/CD pipelines and automation tools. Experience managing observability platforms, building monitoring dashboards, and configuring high-quality, actionable alerting. Strong understanding of Linux systems and networking. Familiarity with container orchestration tools like Kubernetes or Docker. Excellent analytical and problem-solving skills. Experience operating enterprise-size databases; Postgres, Aurora, Redshift, and OpenSearch experience is a plus. Experience with Python or Golang is a plus.

Perks: Employer-paid group health insurance for you and your dependents. 401(k) plan with employer match (or equivalent for non-US-based roles). Flexible paid time off. Regular company-wide in-person events. Home office stipend, and more!

Corporate Values: Follow Through; Deep Understanding; Whatever It Takes; Do Something Smart.
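The role above calls for monitoring and alerting defined as code (for example with Terraform). As a language-consistent sketch of the same idea, the snippet below creates a CloudWatch latency alarm with boto3; the load balancer, SNS topic, and thresholds are hypothetical placeholders.

```python
# Sketch: create a p99 latency alarm for an application load balancer target.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="api-p99-latency-high",
    AlarmDescription="p99 latency above SLO for 3 consecutive periods",
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/example-api/0123456789abcdef"}],
    ExtendedStatistic="p99",            # alarm on a percentile rather than a plain average
    Period=60,
    EvaluationPeriods=3,
    Threshold=0.75,                     # seconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",    # avoid false positives on sparse traffic
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],
)
```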

Posted 1 month ago

Apply

2.0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

Remote

Job Title: BI (Business Intelligence) Engineer (Contract to Hire)
Location: Hybrid (Remote + Onsite in Kolkata Office)
Duration: 3-Month Full-Time Contract (Potential to convert to Full-Time Employee)
Start Date: July 2025 onwards

About the Role:
· We are looking for a talented and detail-oriented BI (Business Intelligence) Engineer to join our team on a 3-month full-time contract basis, with the potential to convert into a permanent role based on performance and business needs.
· The ideal candidate will have hands-on experience in designing, building, and maintaining interactive dashboards and reports using tools like Tableau, Power BI, and Looker Studio, and possess strong skills in working with SQL and relational databases.
· You will work closely with cross-functional teams to turn raw data into meaningful insights that support business decision-making.

Key Responsibilities:
· Design and develop interactive dashboards and reports using Tableau, Power BI, Looker Studio, or similar BI tools.
· Collaborate with business and technical stakeholders to gather requirements and define KPIs.
· Write optimized SQL queries to extract and manipulate data from various databases.
· Ensure data accuracy, consistency, and integrity across reports and dashboards.
· Analyze data trends and provide actionable insights to support business operations.
· Document data models, report logic, and dashboard designs for maintainability.

Required Skills & Qualifications:
a. Education & Experience:
· Bachelor's or Master's degree in Computer Science, Information Systems, Business Analytics, or a related field.
· 2+ years of experience in BI/reporting roles or relevant project-based experience.
b. Technical Experience:
· Hands-on experience with one or more of the following BI tools: Tableau, Power BI, Looker Studio (Google Data Studio).
· Strong proficiency in SQL and experience working with relational databases (e.g., MySQL, PostgreSQL, BigQuery).
· Understanding of data modeling, joins, data cleansing, and transformation techniques.
· Familiarity with cloud-based data warehouses (e.g., BigQuery, Snowflake, AWS Redshift) is a plus.
· Knowledge of Excel, Google Sheets, and scripting (Python or R) for data manipulation is a bonus.
c. Soft Skills:
· Strong analytical thinking and problem-solving mindset.
· Good communication skills to present data clearly to both technical and non-technical audiences.
· Self-motivated, detail-oriented, and able to work independently in a hybrid environment.

Nice to Have Skills:
· Experience in data storytelling or dashboard UX/UI design best practices.
· Exposure to data governance and access control in BI tools.
· Basic understanding of Japanese is a plus.

Work Arrangement: Hybrid: primarily remote, with occasional onsite meetings at our Kolkata office. Must be available to work from the Kolkata office when required.

Contract & Future Opportunity:
· Initial Engagement: 3-month contract with full-time (100%) commitment.
· Future Opportunity: High potential for conversion to a full-time permanent role, depending on performance and business needs.

Posted 1 month ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

Job Summary: We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Design and implement ETL workflows using AWS Glue, Python, and PySpark. Develop and optimize queries using Amazon Athena and Redshift. Build scalable data pipelines to ingest, transform, and load data from various sources. Ensure data quality, integrity, and security across AWS services. Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions. Monitor and troubleshoot ETL jobs and cloud infrastructure performance. Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications: Hands-on experience with AWS Glue, Athena, and Redshift. Strong programming skills in Python and PySpark. Experience with ETL design, implementation, and optimization. Familiarity with S3, Lambda, CloudWatch, and other AWS services. Understanding of data warehousing concepts and performance tuning in Redshift. Experience with schema design, partitioning, and query optimization in Athena. Proficiency in version control (Git) and agile development practices.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
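As a minimal sketch of the Glue-based ETL workflow this listing describes, the job skeleton below reads a catalog table, applies a simple transform, and writes curated Parquet back to S3; the catalog database, table, column names, and output path are placeholders, not details from the posting.

```python
# Minimal AWS Glue job sketch (illustrative only).
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (placeholder names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
).toDF()

# Simple transform: keep completed orders and standardize the date column.
cleaned = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write curated Parquet back to S3 for Athena or Redshift Spectrum to query.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```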

Posted 1 month ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

BI tools: Tableau Desktop, Tableau report and dashboard design, data visualization and analysis, Tableau Server, Tableau Reader; Cognos Report Studio, Query Studio, and Cognos Connection are a plus. Languages: SQL, PL/SQL, T-SQL, SQL*Plus; SAS Base is a plus. Ability to perform complex MS Excel operations, such as pivot tables and filter operations on the underlying data. Knowledge of reporting tools like Qlik Sense and QlikView, and of statistical tools like advanced Excel (VLOOKUP, charts, dashboard design), Visual Basic using Visual Studio, and MS Access is a plus. Critical thinking and analysis skills, with good interpersonal and communication skills. Ability to adapt to and learn new technologies and become quickly proficient with them. Data mining experience. Experience blending data from multiple sources such as flat files, Excel, Oracle, and the Tableau Server environment. Experience with cloud sources such as Amazon Redshift, Snowflake, Google Drive, MS Excel, and Oracle.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 month ago

Apply

9.0 years

0 Lacs

Andhra Pradesh

On-site

Data Engineer: Must have 9+ years of experience in the skills mentioned below.

Must Have: Big Data concepts; Python (core Python, able to write code); SQL; shell scripting; AWS S3.

Good to Have: Event-driven architecture/AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Preferred Education: Master's Degree.

Required Technical and Professional Expertise: Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on cloud data platforms on AWS; experience in AWS EMR, AWS Glue, or Databricks, plus AWS Redshift and DynamoDB; good to excellent SQL skills; exposure to streaming solutions and message brokers such as Kafka.

Preferred Technical and Professional Experience: Certification in AWS and Databricks, or Cloudera Certified Spark Developer.
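For the streaming-pipeline experience mentioned above, a hedged PySpark Structured Streaming sketch reading from Kafka and landing data in cloud storage might look like this; the broker addresses, topic, and paths are assumptions for illustration.

```python
# Sketch: Spark Structured Streaming job that reads a Kafka topic and writes Parquet to S3.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")  # placeholder brokers
    .option("subscribe", "orders")                                   # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to string before parsing downstream.
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/streams/orders/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```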

Posted 1 month ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Job Summary: We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Design and implement ETL workflows using AWS Glue, Python, and PySpark. Develop and optimize queries using Amazon Athena and Redshift. Build scalable data pipelines to ingest, transform, and load data from various sources. Ensure data quality, integrity, and security across AWS services. Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions. Monitor and troubleshoot ETL jobs and cloud infrastructure performance. Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications: Hands-on experience with AWS Glue, Athena, and Redshift. Strong programming skills in Python and PySpark. Experience with ETL design, implementation, and optimization. Familiarity with S3, Lambda, CloudWatch, and other AWS services. Understanding of data warehousing concepts and performance tuning in Redshift. Experience with schema design, partitioning, and query optimization in Athena. Proficiency in version control (Git) and agile development practices.
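As a rough illustration of "develop and optimize queries using Amazon Athena", the boto3 sketch below submits a query and reads the results; the database, table, and S3 output location are placeholders rather than details from the listing.

```python
# Sketch: submit an Athena query, wait for completion, and print the result rows.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

start = athena.start_query_execution(
    QueryString="""
        SELECT customer_id, SUM(amount) AS total_spend
        FROM sales
        WHERE order_date >= DATE '2024-01-01'
        GROUP BY customer_id
        ORDER BY total_spend DESC
        LIMIT 10;
    """,
    QueryExecutionContext={"Database": "analytics_db"},                  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = start["QueryExecutionId"]

# Poll for completion, then read the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```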

Posted 1 month ago

Apply

3.0 - 5.0 years

13 - 17 Lacs

Hyderabad, Chennai

Work from Office

Role & Responsibilities

Job Description: A detail-oriented and technically proficient Business Intelligence (BI) Engineer with strong Tableau expertise to support data analytics, dashboard development, and reporting initiatives. The ideal candidate has a solid background in SQL, data modeling, and visualization, with experience transforming raw data into actionable insights for business stakeholders.

Key Responsibilities
• Design, build, and maintain Tableau dashboards and visualizations that communicate key business metrics.
• Collaborate with business analysts, data engineers, and stakeholders to gather requirements and transform them into technical solutions.
• Write and optimize SQL queries to extract, transform, and load data from various sources.
• Support data quality, validation, and integrity across reports and dashboards.
• Develop and maintain data models and ETL pipelines for BI use cases.
• Perform ad hoc analyses and provide insights to business teams across departments (e.g., Marketing, Finance, Sales).
• Assist in user training and documentation of BI solutions.
• Participate in code reviews, version control, and agile sprint ceremonies (if applicable).

Required Qualifications
• 3-5 years of experience in BI engineering or data analytics roles.
• Proficiency in Tableau (Desktop and Server): creating interactive dashboards, storyboards, and advanced charts.
• Strong knowledge of SQL (PostgreSQL, MySQL, SQL Server, etc.).
• Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
• Familiarity with ETL tools (e.g., Talend, Informatica, Apache Airflow, dbt) is a plus.
• Understanding of data governance and security best practices.
• Ability to translate business needs into scalable BI solutions.

Nice to Have:
• Exposure to cloud platforms like AWS, Azure, or GCP.
• Knowledge of Agile/Scrum methodology.
• Experience in performance tuning of dashboards and SQL queries.

Preferred candidate profile

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Data Engineering
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), Snowflake Data Warehouse
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.
Roles & Responsibilities:
Hands-on development experience in Data Warehousing and/or Software Development.
Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
Perform data integration and sourcing activities across various platforms.
Develop data assets to support optimized analysis for customer and regulatory outcomes.
Provide ongoing support for data platforms, including problem and incident management.
Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
Support continuous improvement and innovation in data engineering practices.
Professional & Technical Skills:
Must-have skills: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
Advanced skills in SQL and Python.
Working knowledge of UNIX, Spark, and Databricks.
Additional Information:
Position: Senior Analyst, Data Engineering
Reports to: Manager, Data Engineering
Division: Personal Bank
Group: 3
Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred
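As an illustration of the AWS (S3, Redshift, Airflow) pipeline work referenced above, here is a minimal Airflow 2.x-style DAG sketch with two placeholder tasks: land data in S3, then load it into Redshift. The DAG id, schedule, and task bodies are hypothetical and stand in for real extract and COPY logic.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_s3(**context):
    # Placeholder: pull data from a source system and land it in S3.
    print("extracting source data to s3://example-landing-bucket/")


def load_to_redshift(**context):
    # Placeholder: issue a COPY into Redshift from the landed S3 files.
    print("loading landed files into Redshift via COPY")


with DAG(
    dag_id="s3_to_redshift_daily_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

    extract >> load   # simple two-step dependency: land files, then load the warehouse
```

Keeping each task idempotent (safe to rerun for the same date) is what makes daily schedules like this easy to backfill and support operationally.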

Posted 1 month ago

Apply

26.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Title: Senior Software Engineer
Location: Mumbai, India (roles in Andheri East and/or Turbhe)
Description: We’re hiring Senior Software Engineers to lead the design and delivery of robust, scalable, and production-grade data systems across a high-growth, multi-venture environment. This is a high-impact leadership role for someone who thrives in hands-on build mode, and who can scale technical delivery without losing speed, quality, or ownership. If this is interesting, we would love to hear from you!
About Blenheim Chalcot
Blenheim Chalcot India is part of Blenheim Chalcot, a global venture builder headquartered in London. With over 26 years of innovation, we've been at the forefront of creating some of the most groundbreaking GenAI-enabled companies. Our ventures lead the charge in digital disruption across a spectrum of industries, from FinTech to EdTech, GovTech to Media, and beyond. Our global presence spans the US, Europe, and Southeast Asia, with a portfolio that employs over 3,000 individuals, manages assets exceeding £1.8 billion, and boasts total portfolio sales of over £500 million.
The role
We’re hiring Senior Software Engineers to build scalable products, tools, and systems across our portfolio companies. This is a high-impact role for someone who thrives in hands-on engineering, writes maintainable code, and is deeply motivated by solving real-world problems with technology. You’ll work closely with product and engineering colleagues in both Mumbai and London, driving delivery within a specific business. If you're passionate about engineering craft, product-led development, and delivering systems that scale, we’d love to hear from you.
Behaviours that we look for
Solve problems rigorously, not reactively - breaking them down into tractable elements and considering multiple paths to resolution.
Write clean, defensive code with clear structure and minimal complexity.
Think about testing and observability early, automating meaningful tests and implementing relevant metrics/logging.
Design systems for operations, not just development - considering CI/CD, resilience, cost, and scalability from day one.
Use data to guide decisions, validating design impact, performance, and real user outcomes.
Key responsibilities
Working within our Engineering Centre of Excellence in Mumbai, you can expect to:
Build and maintain scalable services, data pipelines, and backend systems.
Write readable, efficient code and collaborate closely in code reviews.
Automate robust test coverage (unit, integration, and/or contract tests).
Implement data models used by analysts and ML teams; ensure schema evolution and data integrity are maintained.
Participate in system design sessions and contribute to architectural discussions.
Debug production issues across multiple services and layers of the stack.
Contribute to CI/CD pipelines and infrastructure-aware development practices.
Opportunity
This is an excellent opportunity for experienced engineers looking to step up their scope and impact. You’ll join a collaborative team with exposure to modern engineering practices, GenAI-enabled systems, and business-critical data products. You'll be part of a global network, shaping real customer outcomes.
About You
We are seeking to onboard candidates with a proven track record in engineering, demonstrating strong technical leadership skills and a passion for building high-quality, scalable products. Excellent teamwork, adaptability, and a strategic mindset are essential to being successful in this role.
The Ideal Candidate
We are looking for candidates who bring:
Strong proficiency in at least one modern programming language (e.g., Python, Java).
Experience with both SQL and NoSQL databases; schema design and query optimisation.
Comfort with cloud platforms such as AWS or Azure (e.g., S3, Redshift, Glue, Synapse).
Familiarity with CI/CD practices, automated testing, and observability.
Knowledge of data structures, algorithms, and their real-world trade-offs.
A proactive mindset: ready to take ownership, iterate fast, and learn continuously.
Great communication and teamwork skills – particularly in cross-functional settings.
Process
We have a rigorous but streamlined recruitment process, which respects the time of candidates and portfolio companies alike. It starts with a 15-minute call with a member of our Talent Acquisition team, followed by a meeting with representatives from BC's Engineering Centre of Excellence. Please note that our roles are primarily office based, with modern and well-connected office locations in both Andheri East and Navi Mumbai.
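For the "automate robust test coverage" responsibility above, here is a tiny, hypothetical pytest example: a toy transform function with a happy-path test and a failure-mode test. The function and its behaviour are invented purely for illustration and are not part of the employer's codebase.

```python
import pytest


def normalise_amount(raw: str) -> float:
    """Toy transform: parse a currency string like '1,250.50' into a float."""
    cleaned = raw.replace(",", "").strip()
    if not cleaned:
        raise ValueError("empty amount")
    return float(cleaned)


def test_normalise_amount_happy_path():
    # Typical well-formed input should parse cleanly.
    assert normalise_amount("1,250.50") == 1250.50


def test_normalise_amount_rejects_empty_input():
    # Defensive behaviour: blank input should fail loudly rather than silently return 0.
    with pytest.raises(ValueError):
        normalise_amount("   ")
```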

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Data Engineering
Good-to-have skills: Microsoft SQL Server, Python (Programming Language), Snowflake Data Warehouse
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.
Roles & Responsibilities:
Hands-on development experience in Data Warehousing and/or Software Development.
Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
Perform data integration and sourcing activities across various platforms.
Develop data assets to support optimized analysis for customer and regulatory outcomes.
Provide ongoing support for data platforms, including problem and incident management.
Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
Support continuous improvement and innovation in data engineering practices.
Professional & Technical Skills:
Must-have skills: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
Advanced skills in SQL and Python.
Working knowledge of UNIX, Spark, and Databricks.
Additional Information:
Position: Senior Analyst, Data Engineering
Reports to: Manager, Data Engineering
Division: Personal Bank
Group: 3
Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred
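To make the S3-to-Redshift loading skills above concrete, here is a hedged sketch that uses psycopg2 to issue a Redshift COPY from S3. The cluster endpoint, schema, table, S3 prefix, and IAM role ARN are placeholders rather than real resources.

```python
import psycopg2

# Hypothetical cluster endpoint and credentials; in practice these come from a secrets manager.
conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="********",
)

# COPY is Redshift's bulk-load path: parallel ingest straight from S3 into the target table.
copy_sql = """
    COPY personal_banking.transactions
    FROM 's3://example-landing-bucket/transactions/2024/06/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)   # the transaction commits on clean exit of the with-block
conn.close()
```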

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us
Zelis is modernizing the healthcare financial experience in the United States (U.S.) by providing a connected platform that bridges the gaps and aligns interests across payers, providers, and healthcare consumers. This platform serves more than 750 payers, including the top 5 health plans, BCBS insurers, regional health plans, TPAs and self-insured employers, and millions of healthcare providers and consumers in the U.S. Zelis sees across the system to identify, optimize, and solve problems holistically with technology built by healthcare experts, driving real, measurable results for clients.
Why We Do What We Do
In the U.S., consumers, payers, and providers face significant challenges throughout the healthcare financial journey. Zelis helps streamline the process by offering solutions that improve transparency, efficiency, and communication among all parties involved. By addressing the obstacles that patients face in accessing care, navigating the intricacies of insurance claims, and the logistical challenges healthcare providers encounter with processing payments, Zelis aims to create a more seamless and effective healthcare financial system.
Zelis India plays a crucial role in this mission by supporting various initiatives that enhance the healthcare financial experience. The local team contributes to the development and implementation of innovative solutions, ensuring that technology and processes are optimized for efficiency and effectiveness. Beyond operational expertise, Zelis India cultivates a collaborative work culture, leadership development, and global exposure, creating a dynamic environment for professional growth. With hybrid work flexibility, comprehensive healthcare benefits, financial wellness programs, and cultural celebrations, we foster a holistic workplace experience. Additionally, the team plays a vital role in maintaining high standards of service delivery and contributes to Zelis’ award-winning culture.
Position Overview
Job Title: Data Engineer – Zelis Data Cloud
Location: Hyderabad, India
Department: Data Platform – ZDC, ZDI
Reports To: Manager, ZDC
Job Summary
Data Engineer – Key Responsibilities
At least 5 years of experience in designing and developing Data Pipelines & Assets.
Must have experience with at least one Columnar MPP Cloud data warehouse (Snowflake/Azure Synapse/Redshift) for at least 3 years.
Experience in ETL tools like Azure Data Factory, Fivetran/DBT for 2 years.
Experience with Git and Azure DevOps.
Experience in Agile, Jira, and Confluence.
Solid understanding of programming SQL objects (procedures, triggers, views, functions) in SQL Server. Experience optimizing SQL queries is a plus.
Working knowledge of Azure Architecture, Data Lake.
Willingness to contribute to documentation (e.g., mapping, defect logs).
Qualifications
Bachelor's degree in Computer Science, Statistics, or a related field.
Self-starter and learner.
Able to understand and probe for requirements.
Generate functional specs for code migration or ask the right questions thereof.
Hands-on programmer with a thorough understanding of performance tuning techniques.
Handling large data volume transformations (order of 100 GBs monthly).
Able to create solutions / data flows to suit requirements.
Produce timely documentation, e.g., mapping, UTR, defect/KEDB logs etc.
Tech experience expected
Primary: Snowflake, DBT (development & testing)
Secondary: Python, ETL or any data processing tool
Nice to have: Domain experience in Healthcare
Experience range: 4-6 yrs
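Reflecting the Snowflake/DBT focus above, the sketch below shows an incremental upsert pattern using the snowflake-connector-python package (assumed available); a dbt incremental model would generate similar MERGE SQL under the hood. The account, credentials, and table names are hypothetical.

```python
import snowflake.connector

# Hypothetical account and credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_USER",
    password="********",
    warehouse="TRANSFORM_WH",
    database="EXAMPLE_DB",
    schema="STAGING",
)

# Incremental upsert from a staging delta table into a curated table.
merge_sql = """
MERGE INTO CURATED.CLAIMS AS tgt
USING STAGING.CLAIMS_DELTA AS src
    ON tgt.CLAIM_ID = src.CLAIM_ID
WHEN MATCHED THEN UPDATE SET tgt.STATUS = src.STATUS, tgt.UPDATED_AT = src.UPDATED_AT
WHEN NOT MATCHED THEN INSERT (CLAIM_ID, STATUS, UPDATED_AT)
    VALUES (src.CLAIM_ID, src.STATUS, src.UPDATED_AT);
"""

cur = conn.cursor()
try:
    cur.execute(merge_sql)   # only changed or new rows are written, keeping reloads cheap
finally:
    cur.close()
    conn.close()
```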

Posted 1 month ago

Apply

5.0 years

15 - 25 Lacs

Mumbai Metropolitan Region

On-site

Data Engineer – On-Site, India
Industry: Enterprise Data Analytics & Digital Transformation Consulting. We architect and operationalize large-scale data platforms that power BI, AI, and advanced reporting for global clients across finance, retail, and manufacturing. Leveraging modern cloud services and proven ETL frameworks, our teams turn raw data into trusted, analytics-ready assets that accelerate business decisions.
Role & Responsibilities
Design, build, and optimize end-to-end ETL pipelines that ingest, cleanse, and transform high-volume datasets using SQL and ELT best practices.
Create scalable data models and dimensional schemas to support reporting, dashboarding, and machine-learning use cases.
Develop and maintain batch and near-real-time workflows in Airflow or similar orchestration tools, ensuring fault tolerance and SLA compliance.
Collaborate with analysts, data scientists, and product owners to translate business requirements into performant data solutions.
Implement rigorous data quality checks, lineage tracking, and metadata management to guarantee trust and auditability.
Tune queries, indexes, and storage partitions for cost-efficient execution across on-premise and cloud data warehouses.
Skills & Qualifications
Must-Have
5+ years hands-on experience as a Data Engineer or similar.
Advanced SQL proficiency for complex joins, window functions, and performance tuning.
Proven expertise in building ETL/ELT pipelines with tools such as Informatica, Talend, or custom Python.
Solid understanding of dimensional modeling, star/snowflake schemas, and data-vault concepts.
Experience with workflow orchestration (Airflow, Luigi, or equivalent) and version control (Git).
Strong grasp of data quality frameworks and error-handling strategies.
Preferred
Exposure to cloud platforms (AWS Redshift, Azure Synapse, or Google BigQuery).
Knowledge of containerization and CI/CD pipelines for data workloads.
Familiarity with streaming technologies (Kafka, Kinesis) and real-time ETL patterns.
Working knowledge of BI tools (Tableau, Power BI) and their data connectivity.
Benefits & Culture Highlights
Work with high-calibre data practitioners and cutting-edge cloud tech.
Merit-driven growth path, certification sponsorships, and continuous learning stipends.
Inclusive, innovation-first culture that rewards problem-solving and ownership.
Skills: kafka, data warehouse, containerization, airflow, elt, luigi, error-handling strategies, git, aws redshift, talend, star schema, power bi, informatica, data vault, ci/cd, azure synapse, etl, sql, kinesis, performance tuning, data modeling, data quality frameworks, python, dimensional modeling, snowflake schema, tableau, google bigquery
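As a small illustration of the "rigorous data quality checks" responsibility, here is a hypothetical pandas-based validation step of the kind that might gate a batch before it is published; the column names, rules, and sample data are invented for the example.

```python
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> list:
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].isnull().any():
        failures.append("null order_id values found")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures


# Tiny synthetic batch that deliberately violates two rules, to show the gate in action.
batch = pd.DataFrame(
    {"order_id": [1, 2, 2, None], "amount": [100.0, -5.0, 30.0, 10.0]}
)
problems = run_quality_checks(batch)
if problems:
    # In a real pipeline this would fail the orchestration task and alert the on-call engineer.
    raise ValueError("data quality checks failed: " + "; ".join(problems))
```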

Posted 1 month ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

DXFactor is a US-based tech company working with customers across the globe. We are a Great Place to Work certified company.
We are looking for candidates for Data Engineer (4 to 6 yrs exp).
We have our presence in: US, India (Ahmedabad, Bangalore)
Location: Ahmedabad
Website: www.DXFactor.com
Designation: Data Engineer (Expertise in Snowflake, AWS & Python)
Key Responsibilities
Design, develop, and maintain scalable data pipelines for batch and streaming workflows
Implement robust ETL/ELT processes to extract data from various sources and load into data warehouses
Build and optimize database schemas following best practices in normalization and indexing
Create and maintain documentation for data flows, pipelines, and processes
Collaborate with cross-functional teams to translate business requirements into technical solutions
Monitor and troubleshoot data pipelines to ensure optimal performance
Implement data quality checks and validation processes
Build and maintain CI/CD workflows for data engineering projects
Stay current with emerging technologies and recommend improvements to existing systems
Requirements
Bachelor's degree in Computer Science, Information Technology, or related field
Minimum 4+ years of experience in data engineering roles
Strong proficiency in Python programming and SQL query writing
Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra)
Experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery)
Proven track record in building efficient and scalable data pipelines
Practical knowledge of batch and streaming data processing approaches
Experience implementing data validation, quality checks, and error handling mechanisms
Working experience with cloud platforms, particularly AWS (S3, EMR, Glue, Lambda, Redshift) and/or Azure (Data Factory, Databricks, HDInsight)
Understanding of different data architectures including data lakes, data warehouses, and data mesh
Demonstrated ability to debug complex data flows and optimize underperforming pipelines
Strong documentation skills and ability to communicate technical concepts effectively
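To illustrate the AWS (S3, Lambda) experience listed above, here is a minimal, hypothetical Lambda handler that validates a newly landed S3 object before downstream ETL picks it up. The bucket names and the assumption of a JSON-lines file format are purely for the sketch.

```python
import json
import boto3

s3 = boto3.client("s3")


def lambda_handler(event, context):
    """Triggered by an S3 ObjectCreated event; validates the new file before downstream ETL."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    obj = s3.get_object(Bucket=bucket, Key=key)
    body = obj["Body"].read()

    # Minimal validation: non-empty JSON-lines file; a real pipeline would also check schema.
    lines = [line for line in body.decode("utf-8").splitlines() if line.strip()]
    if not lines:
        raise ValueError(f"empty file received: s3://{bucket}/{key}")
    json.loads(lines[0])  # ensure at least the first record parses

    print(f"validated {len(lines)} records from s3://{bucket}/{key}")
    return {"status": "ok", "records": len(lines)}
```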

Posted 1 month ago

Apply

18.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About the Role
This is a senior leadership position within the Business Information Management Practice. The individual is responsible for the overall vision, strategy, delivery and operations of key accounts in BIM. This requires working closely with the global executive team, subject matter experts, solution architects, project managers and client teams to conceptualize, build and operate Big Data Solutions. Communicate with internal management, client sponsors and senior leaders on the project status, risks, solution, etc.
Responsibilities
Client Delivery Leadership Role: Candidate to be responsible for delivering at least $10M+ revenue using information management solution(s): Big Data, Data Warehouse, Data Lake, GEN AI, Master Data Management System, Business Intelligence & Reporting solutions, IT Architecture Consulting, Cloud Platforms (AWS/AZURE), SaaS/PaaS based solutions.
Practice and Team Leadership Role:
Self-Driven for results - Able to take initiative and set priorities; pursue tasks tenaciously & with a need to finish. Able to overcome setbacks which may occur along the way.
Customer Focus - Dedicated to meeting the expectations of internal and external clients.
Problem Solving - Uses rigorous logic and methods to solve difficult problems with effective solutions. Probes all fruitful sources for answers. Is excellent at honest analysis. Looks beyond the obvious and doesn't stop at the first answer.
Learning on the Fly - Learns quickly when facing new problems. A relentless and versatile learner.
Proven ability to handle multiple projects/programs while meeting deadlines and documenting progress towards those deadlines.
Excellent communication skills (must be able to interface with both technical and business leaders in the organization).
Leadership skills to coach, mentor and develop senior and middle-level staff. Develop the manager layers to be leaders of the future.
Be known as a Thought Leader in a specific aspect of the Information Management technology spectrum or Pharma domain.
Direct the training & skill enhancement of the team, in line with pipeline opportunities.
Ability to lead large RFP responses, design and implement the solution for proposals and customer decks.
Assist in generating order pipeline, road shows, develop go-to-market strategy for regions & verticals.
Create market-facing collaterals as per requirements. Able to write white papers, blogs, technical/functional points of view.
Qualifications
MBA in Business Management
Bachelor of Computer Science
Required Skills
Candidate should have 18+ years of prior experience (preferably including at least 5 yrs in Pharma Commercial domain) in delivering customer focused information management solution(s): Big Data, Data Warehouse, Data Lake, Master Data Management System, Business Intelligence & Reporting solutions, IT Architecture Consulting, Cloud Platforms (AWS/AZURE), SaaS/PaaS based solutions.
Should have successfully done 4-5 end-to-end DW implementations using technologies: Big Data, Data Management and BI technologies such as Redshift, Hadoop, ETL tools like Informatica/Matillion/Talend, BI tools like Qlik/MSTR/Tableau, Dataiku/Knime and Cloud Offerings from AWS/Azure.
Ability to lead large RFP responses, design and implement the solution for proposals and customer decks.
Should have led large teams of at least 100+ resources.
Good communication, client facing and leadership skills.
Hands-on knowledge of databases, SQL, reporting solutions like BI tools or Excel/VBA.
Preferred Skills
Teamwork & Leadership
Motivation to Learn and Grow
Ownership
Cultural Fit
Talent Management
Capability Building / Thought Leadership
About the Company
Axtria is a global provider of cloud software and data analytics to the Life Sciences industry. We help Life Sciences companies transform the product commercialization journey to drive sales growth and improve healthcare outcomes for patients. We are acutely aware that our work impacts millions of patients and lead passionately to improve their lives. Since our founding in 2010, technology innovation has been our winning differentiation, and we continue to leapfrog competition with platforms that deploy Artificial Intelligence and Machine Learning. Our cloud-based platforms - Axtria DataMax™, Axtria InsightsMax™, Axtria SalesIQ™, and Axtria MarketingIQ™ - enable customers to efficiently manage data, leverage data science to deliver insights for sales and marketing planning and manage end-to-end commercial operations. With customers in over 75 countries, Axtria is one of the largest global commercial solutions providers in the Life Sciences industry. We continue to win industry recognition for growth and are featured in some of the most aspirational lists - INC 5000, Deloitte FAST 500, NJBiz FAST 50, SmartCEO Future 50, Red Herring 100, and several other growth and technology awards.
Axtria is looking for exceptional talent to join our rapidly growing global team. People are our biggest perk! Our transparent and collaborative culture offers a chance to work with some of the brightest minds in the industry. Axtria Institute, our in-house university, offers the best training in the industry and an opportunity to learn in a structured environment. A customized career progression plan ensures every associate is set up for success and able to do meaningful work in a fun environment. We want our legacy to be the leaders we produce for the industry. Will you be next?

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview:
Blue Spire is hiring across multiple levels, from Database Engineers to Engineering Managers, to support our client's mission-critical Analytics Platforms. The team will be part of a centralized group of database engineers responsible for the maintenance and support of our client's most critical databases. This is a high-impact opportunity within a new team which drives technical excellence and partners closely with global business and technology teams.
Responsibilities
Requires conceptual knowledge of database practices and procedures such as DDL, DML and DCL.
Knowledge/experience with database management/administration: Redshift, Snowflake or Neo4j.
Requires basic SQL skills, including SELECT, FROM, WHERE and ORDER BY.
Ability to code SQL joins, subqueries, aggregate functions (AVG, SUM, COUNT), and use data manipulation techniques (UPDATE, DELETE).
Understanding of basic data relationships and schemas.
Develop basic Entity-Relationship diagrams.
Conceptual understanding of cloud computing.
Can solve routine problems using existing procedures and standard practices.
Can look up error codes and open tickets with vendors.
Ability to execute explains and identify poorly written queries.
Review data structures to ensure they adhere to database design best practices.
Understanding of the different cloud models (IaaS, PaaS, SaaS), service models, and deployment options (public, private, hybrid).
Troubleshoot database issues, such as integrity issues, blocking/deadlocking issues, log shipping issues, connectivity issues, security issues, memory issues, disk space, etc.
Understanding of cloud security concepts, including data protection, access control, and compliance.
Manages risks that are associated with the use of information technology. Identifies, assesses, and treats risks that might affect the confidentiality, integrity, and availability of the organization's assets.
Ability to design and implement highly performing databases using partitioning and indexing that meet or exceed the business requirements.
Documents a complex software system design as an easily understood diagram, using text and symbols to represent the way data needs to flow.
Ability to code complex SQL.
Performs effective backup management and periodic database restoration testing.
General DB cloud networking skills – VPCs, SGs, KMS keys, private links.
Ability to develop stored procedures and at least one scripting language for reusable code and improved performance.
Know how to import and export data into and out of databases using ETL tools, code, or migration tools like DMS or scripts.
Knowledge of DevOps principles and tools, such as CI/CD.
Attention to detail and a customer-centric approach.
Solves complex problems by taking a new perspective on existing solutions; exercises judgment based on the analysis of multiple sources of information.
Ability to optimize queries for performance and resource efficiency.
Review database metrics to identify performance issues.
Required Qualifications
Experience with database management/administration: Redshift, Snowflake or Neo4j.
Working with incident, change and problem management processes and procedures.
Experience maintaining and supporting large-scale critical database systems in the cloud.
Experience working with AWS cloud hosted databases.
An understanding of one programming language, including at least one front-end framework (Angular/React/Vue), such as Python3, Java, JavaScript, Ruby, Golang, C, C++, etc.
Experience with cloud computing, ETL and streaming technologies – OpenShift, DataStage, Kafka.
Experience with agile development methodology.
Strong SQL performance and tuning skills.
Excellent communication and client interfacing skills.
Experience working in the banking industry.
Experience working in an agile development environment.
Experience working in cloud environments such as AWS, Azure or Google.
Experience with CI/CD pipelines (Jenkins, Liquibase or equivalent).
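For the items above about executing explains and designing performant tables with partitioning and indexing, here is a hedged, Redshift-flavoured sketch (in Redshift the analogous physical-design levers are distribution and sort keys): it creates a table with DISTKEY/SORTKEY and prints the query plan for a date-range aggregate. The cluster endpoint, schema, and table names are placeholders.

```python
import psycopg2

# Hypothetical cluster endpoint and credentials.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="dba_user",
    password="********",
)

ddl = """
CREATE TABLE IF NOT EXISTS reporting.daily_txn (
    txn_id      BIGINT,
    account_id  BIGINT,
    txn_date    DATE,
    amount      DECIMAL(12, 2)
)
DISTKEY (account_id)   -- co-locate rows that are joined on account_id
SORTKEY (txn_date);    -- prune blocks for date-range filters
"""

explain = """
EXPLAIN
SELECT account_id, SUM(amount)
FROM reporting.daily_txn
WHERE txn_date BETWEEN '2024-01-01' AND '2024-01-31'
GROUP BY account_id;
"""

with conn, conn.cursor() as cur:
    cur.execute(ddl)
    cur.execute(explain)
    for (plan_line,) in cur.fetchall():   # inspect the plan for full scans or large broadcasts
        print(plan_line)
conn.close()
```

Reading the plan output is how a poorly written query (missing filter on the sort key, or a join that forces a broadcast) is usually spotted before it hits production.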

Posted 1 month ago

Apply