2.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
Experian is a global data and technology company that powers opportunities for individuals and businesses worldwide. Our unique combination of data, analytics, and software allows us to redefine lending practices, prevent fraud, simplify healthcare, create marketing solutions, and provide deeper insight into the automotive market. With a presence in markets such as financial services, healthcare, automotive, agribusiness, and insurance, we help millions of people achieve their financial goals while saving them time and money.

As part of our commitment to unlocking the power of data, we are looking for a skilled professional to join our team as a Business Intelligence Developer. In this role, you will develop client-facing standardized reports using Business Intelligence tools such as Sigma and Tableau. You will conduct data profiling on source data with minimal documentation, troubleshoot data independently, perform detailed data analyses, and develop complex SQL code. You will also write secure, stable, testable, and maintainable Python code with minimal defects.

Your key responsibilities include performing root cause analysis, proposing solutions, and taking ownership of next steps for issue resolution. You will create and maintain report specifications and process documentation as part of the required data deliverables. Collaborating with Engineering teams, you will discover and leverage data introduced into the environment and serve as a liaison between business and technical teams, delivering cross-functional reporting solutions.
To be successful in this role, you must have a minimum of 7 years of experience in BI visualization development and support, along with 2 years of experience in Sigma report development and support, 2 years in Tableau Server administration, 3 years with the AWS data ecosystem (Redshift, S3, etc.), and 2 years with Python. Experience in an Agile environment and familiarity with MWAA and Business Objects are advantageous. You should also possess excellent customer-facing communication skills and be a highly motivated, detail-oriented self-starter able to work independently to formulate innovative solutions. If you can multitask and prioritize an evolving workload in a fast-paced environment and provide on-call production support, we encourage you to apply. A BS degree or higher in MIS or an engineering field is required for this position.

Join Experian, where we celebrate uniqueness and prioritize our people. Our culture and focus on DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, and volunteering have earned us accolades such as World's Best Workplaces 2024, Great Place To Work in 24 countries, and Glassdoor Best Places to Work 2024. Find out more about Experian Life on social or our Careers Site to understand why we are dedicated to creating a better tomorrow together.
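The "data profiling on source data with minimal documentation" duty above can be sketched in a few lines. This is a minimal, stdlib-only illustration (the function name and the sample `statuses` column are made up for the example; a real profiling pass would run against Redshift or S3 extracts):

```python
from collections import Counter

def profile_column(values):
    """Basic profiling stats for one column of poorly documented
    source data: null count, distinct count, and the modal value."""
    non_null = [v for v in values if v is not None]
    counts = Counter(non_null)
    top = counts.most_common(1)[0][0] if counts else None
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(counts),
        "top_value": top,
    }

# Hypothetical status column pulled from an undocumented source table.
statuses = ["active", "active", None, "closed", "active", None]
print(profile_column(statuses))
# {'rows': 6, 'nulls': 2, 'distinct': 2, 'top_value': 'active'}
```

Stats like these are typically the first step before writing the SQL that feeds a Sigma or Tableau report.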
Posted 4 weeks ago
8.0 - 13.0 years
6 - 11 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
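The "job validation" responsibility above, when migrating tables from Hive to Iceberg, often boils down to reconciling the legacy and migrated copies. A minimal pure-Python sketch of that idea (function names are hypothetical; in practice the row sets would come from Spark reads of the Hive and Iceberg tables):

```python
import hashlib

def dataset_fingerprint(rows):
    """Return (row_count, order-independent checksum) for a dataset.

    Each row is hashed individually and the digests are XOR-combined,
    so the result does not depend on row order."""
    count, combined = 0, 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        combined ^= int.from_bytes(digest, "big")
        count += 1
    return count, combined

def validate_migration(source_rows, target_rows):
    """Compare a legacy (Hive) extract against the migrated (Iceberg) copy."""
    src = dataset_fingerprint(source_rows)
    tgt = dataset_fingerprint(target_rows)
    return {"row_count_match": src[0] == tgt[0],
            "checksum_match": src[1] == tgt[1]}

# The migrated copy arrives in a different order but is identical.
source = [("a", 1), ("b", 2), ("c", 3)]
target = [("c", 3), ("a", 1), ("b", 2)]
print(validate_migration(source, target))
# {'row_count_match': True, 'checksum_match': True}
```

Because the checksum is order-independent, the check survives the repartitioning that a Hive-to-Iceberg rewrite normally introduces.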
Posted 2 months ago
8.0 - 13.0 years
6 - 11 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
Posted 2 months ago
8.0 - 13.0 years
6 - 11 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
Posted 2 months ago
7.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Develop client-facing standardized reports with Business Intelligence tools such as Tableau. Perform data profiling on source data with minimal documentation. Independently troubleshoot data, perform detailed data analyses, and develop complex SQL code. Write secure, stable, testable, and maintainable Python code with minimal defects. Perform root cause analysis, propose solutions, and take ownership of next steps for their resolution. Create and maintain report specifications and process documentation as part of the required data deliverables. Write queries to pull, summarize, and analyze data from various data sets and platforms. Collaborate with Engineering teams to discover and leverage data being introduced into the environment. Serve as liaison between business and technical teams to achieve project objectives, delivering cross-functional reporting solutions. Multitask and prioritize an evolving workload in a fast-paced environment. Provide on-call production support.

Qualifications
Minimum 7 years of experience in Tableau development and support. 2 years of experience in Business Objects report development and support. 2 years of experience in Tableau Server administration. 3 years of experience with the AWS data ecosystem (Redshift, S3, etc.). 2 years of experience with Python. 3 years of experience in an Agile environment. Experience with MWAA and Sigma is a plus. Excellent customer-facing communication skills between business partners and technical teams. Highly motivated self-starter, detail-oriented, and able to work independently to formulate innovative solutions. Education: BS degree or higher in MIS or an engineering field.

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators.
We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces 2024 (Fortune Top 25), Great Place To Work in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability, or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together
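The "write queries to pull/summarize/analyze data" duty in the description above typically produces aggregation queries that feed a Tableau or Sigma data source. A small sketch, with sqlite3 standing in for Redshift (the `loan_reports` table and its columns are invented for the example):

```python
import sqlite3

# In-memory database with a toy fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loan_reports (region TEXT, amount REAL);
    INSERT INTO loan_reports VALUES
        ('south', 120.0), ('south', 80.0), ('north', 50.0);
""")

# Summarize per region, highest total first -- the shape of query
# a BI tool would consume as an extract or live data source.
rows = conn.execute("""
    SELECT region, COUNT(*) AS n, SUM(amount) AS total
    FROM loan_reports
    GROUP BY region
    ORDER BY total DESC
""").fetchall()
print(rows)
# [('south', 2, 200.0), ('north', 1, 50.0)]
```

On Redshift the same GROUP BY shape applies; only the connection layer and data volumes differ.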
Posted 2 months ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad, Bengaluru
Work from Office
About our team
DEX (Kotak's Data Exchange) is the central data org for Kotak Bank and manages the bank's entire data experience. The org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives engineers a great opportunity to build things from scratch and create a best-in-class data lakehouse solution. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, SparkSQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions (real-time, micro-batch, batch, and analytics) in a programmatic way; and look ahead to systems that can be operated by machines using AI technologies.

The data platform org is divided into three key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost-optimization solutions such as EMR optimizers; automation; and observability capabilities for Kotak's data platform.
The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for a financial org. It will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.

Data Governance
This will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform.

If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you. Your day-to-day role will include: driving business decisions with technical input and leading the team; designing, implementing, and supporting a data infrastructure from scratch; managing AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA; extracting, transforming, and loading data from various sources using SQL and AWS big data technologies; exploring and learning the latest AWS technologies to enhance capabilities and efficiency; collaborating with data scientists and BI engineers to adopt best practices in reporting and analysis; improving ongoing reporting and analysis processes, automating or simplifying self-service support for customers; and building data platforms, data pipelines, and data management and governance tools.
BASIC QUALIFICATIONS for Data Engineer
Bachelor's degree in Computer Science, Engineering, or a related field. 3-5 years of experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.

PREFERRED QUALIFICATIONS
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.

For managers: customer centricity and obsession for the customer; ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working; ability to structure and organize teams and streamline communication; prior experience executing large-scale data engineering projects.
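A data-lake platform like the one described above usually standardizes how partitions are laid out in S3. A minimal sketch of such a convention (the bucket, table, and helper names are illustrative, not Kotak's actual layout):

```python
from datetime import date

def partition_path(bucket, table, event_date, fmt="parquet"):
    """Build the S3 prefix for one daily partition of a lake table.

    Hive-style `dt=YYYY-MM-DD` partitioning keeps the data discoverable
    by Glue crawlers, Spark, and Athena alike."""
    return (f"s3://{bucket}/{table}/"
            f"dt={event_date.isoformat()}/part.{fmt}")

print(partition_path("example-lake", "txns", date(2024, 1, 31)))
# s3://example-lake/txns/dt=2024-01-31/part.parquet
```

Centralizing a helper like this in the platform layer is what lets thousands of pipelines share one consistent storage layout.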
Posted 2 months ago
5.0 - 10.0 years
12 - 22 Lacs
Pune, Chennai, Bengaluru
Work from Office
Job Summary: AWS Developer (Offshore Role) Role Overview: Responsible for migrating and transforming data pipelines from legacy Cloudera/Hadoop systems to AWS-native solutions, using tools like PySpark, MWAA, and EMR. Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus. Mandatory: AWS Developer experience.
Posted 3 months ago