3 - 8 years
22 - 25 Lacs
Noida
Work from Office
We at Innovaccer are looking for a Software Development Engineer-II (Backend) to build the most amazing product experience. You'll get to work with other engineers to build delightful feature experiences that understand and solve our customers' pain points.

A Day in the Life
- Build efficient and reusable applications and abstractions.
- Identify and communicate back-end best practices.
- Participate in the project life cycle from pitch/prototyping through definition and design to build, integration, QA, and delivery.
- Analyze and improve the performance, scalability, stability, and security of the product.
- Improve engineering standards, tooling, and processes.

What You Need
- 3+ years of experience with a start-up mentality and a high willingness to learn.
- Expert in Python and experience with any web framework (Django, FastAPI, Flask, etc.).
- Aggressive problem diagnosis and creative problem-solving skills.
- Expert in Kubernetes and containerization.
- Experience with RDBMS and NoSQL databases such as Postgres and MongoDB (any OLAP database is good to have).
- Experience in solution architecture.
- Experience with cloud service providers such as AWS or Azure.
- Experience with Kafka, RabbitMQ, or other queuing services is good to have.
- Working experience with big data / distributed systems and async programming.
- Bachelor's degree in Computer Science/Software Engineering.

Preferred Skills
- Expert in Python and any web framework.
- Experience working with Kubernetes and any cloud provider(s).
- Any SQL or NoSQL database.
- Working experience with distributed systems.

Here's What We Offer
- Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
- Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
- Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices
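The async-programming and queuing requirements above can be illustrated with a minimal, stdlib-only sketch, where `asyncio.Queue` stands in for a broker such as Kafka or RabbitMQ (all names are illustrative, not part of any real service):

```python
# Illustrative sketch: decoupling a producer from a consumer with an
# in-process asyncio.Queue, the same pattern a broker like Kafka or
# RabbitMQ provides between services.
import asyncio

async def producer(queue: asyncio.Queue, items: list) -> None:
    for item in items:
        await queue.put(item)           # enqueue work
    await queue.put(None)               # sentinel: no more items

async def consumer(queue: asyncio.Queue) -> list:
    processed = []
    while True:
        item = await queue.get()
        if item is None:                # sentinel reached, stop
            break
        processed.append(item.upper())  # stand-in for real processing
    return processed

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue(maxsize=10)
    results, _ = await asyncio.gather(
        consumer(queue), producer(queue, ["a", "b", "c"])
    )
    return results

if __name__ == "__main__":
    print(asyncio.run(main()))          # ['A', 'B', 'C']
```

The bounded `maxsize` gives backpressure: a fast producer blocks on `put` until the consumer catches up, which is the core property queuing services provide at scale.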
Posted 3 months ago
10 - 15 years
50 - 55 Lacs
Bengaluru
Work from Office
The candidate should have proven expertise in building scalable, customer-facing platforms, and in evangelizing the platform with customers and internal stakeholders.
- Expert-level knowledge of cloud computing, including VPC network design, the shared responsibility matrix, cloud databases, NoSQL databases, data pipelines on the cloud, VMs and VM orchestration, and serverless frameworks. This should span all three major cloud providers (AWS, Azure, GCP), or at least two of the three.
- Expert-level knowledge of data ingestion paradigms and the use of different database types (OLTP, OLAP) for specific purposes.
- Hands-on experience with Apache Spark, Apache Flink, Kafka, Kinesis, Pub/Sub, Databricks, Apache Airflow, Apache Iceberg, and Presto.
- Expertise in designing ML pipelines for experiment management, model management, feature management, model retraining, A/B testing of models, and design of APIs for model inferencing at scale.
- Proven expertise with Kubeflow and SageMaker/Vertex AI/Azure AI.
- SME in LLM serving paradigms, with deep knowledge of GPU architectures and distributed training and serving of large language models.
- Expertise in model- and data-parallel training, with frameworks like DeepSpeed and serving frameworks like vLLM.
- Proven expertise in model fine-tuning and model optimization techniques to achieve better latencies and better accuracy in results.
- Expert at reducing the training and resource requirements of fine-tuning LLM and LVM models.
- Wide knowledge of different LLM models, with an informed opinion on the applicability of each model based on the use cases.
- Proven expertise with specific customer use cases, having seen delivery of a solution end to end from engineering to production.
- Proven expertise in DevOps and LLMOps, knowledge of Kubernetes, Docker, and container orchestration, and deep knowledge of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.

Skill Matrix
- LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
- LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
- DevOps: Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
- Databases/Data warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
- Cloud expertise: AWS/Azure/GCP
- Cloud certifications: AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
- Proficient in Python, SQL, JavaScript
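One item in the posting above, A/B testing of models, can be sketched as a deterministic traffic splitter. The names (`bucket`, `route_request`, `model_a`/`model_b`) and the hash-based scheme are assumptions for illustration, not any vendor's API:

```python
# Hypothetical sketch: deterministic A/B traffic splitting between a
# baseline model and a candidate model. Hashing the user id (rather than
# random choice) keeps each user on the same variant across requests.
import hashlib

def bucket(user_id: str, buckets: int = 100) -> int:
    """Map a user id to a stable bucket in [0, buckets)."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % buckets

def route_request(user_id: str, treatment_pct: int = 10) -> str:
    """Send roughly `treatment_pct`% of users to the candidate model."""
    return "model_b" if bucket(user_id) < treatment_pct else "model_a"

# A user always lands on the same variant across repeated requests:
assert route_request("user-42") == route_request("user-42")
```

In a real serving stack the variant label would select an inference endpoint, and outcome metrics would be logged per variant for the statistical comparison.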
Posted 3 months ago
4 - 7 years
10 - 15 Lacs
Bengaluru
Work from Office
Responsibilities
- Develop, test, and support future-ready data solutions for customers across industry verticals.
- Develop, test, and support end-to-end batch and near real-time data flows/pipelines.
- Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and associated technologies.
- Communicate risks and ensure understanding of those risks.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Graduate with a minimum of 6+ years of related experience.
- Experience in modelling and business system designs.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Great expertise in writing T-SQL code.
- Well versed with data warehouse schemas and OLAP techniques.

Preferred technical and professional experience
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
- Must be a strong team player/leader, with the ability to lead data transformation projects with multiple junior data engineers.
- Strong oral, written, and interpersonal skills for interacting throughout all levels of the organization.
- Ability to communicate complex business problems and technical solutions.

About Business Unit
IBM Consulting is IBM's consulting and global professional services business, with market-leading capabilities in business and technology transformation. With deep expertise in many industries, we offer strategy, experience, technology, and operations services to many of the most innovative and valuable companies in the world. Our people are focused on accelerating our clients' businesses through the power of collaboration. We believe in the power of technology responsibly used to help people, partners, and the planet.
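The batch data flows this role supports follow the classic extract-transform-load shape, which can be sketched with in-memory lists standing in for the source system and warehouse (all names here are illustrative; a real pipeline would use DataStage or a cloud ETL service as the posting notes):

```python
# Minimal ETL sketch: extract from a source, validate/normalize in the
# transform step, and load into a target, returning the row count.

def extract(rows: list) -> list:
    """Stand-in for reading from a source system."""
    return rows

def transform(rows: list) -> list:
    """Normalize fields and drop records that fail validation."""
    out = []
    for r in rows:
        if not r.get("id"):
            continue                      # data quality: reject rows without a key
        out.append({"id": r["id"], "amount": round(float(r["amount"]), 2)})
    return out

def load(rows: list, target: list) -> int:
    """Stand-in for writing to a warehouse table; returns rows loaded."""
    target.extend(rows)
    return len(rows)

warehouse: list = []
source = [{"id": 1, "amount": "19.994"}, {"id": None, "amount": "5"}]
loaded = load(transform(extract(source)), warehouse)
print(loaded, warehouse)  # 1 [{'id': 1, 'amount': 19.99}]
```

The same three-stage shape scales from this toy example to the near real-time pipelines the posting describes; only the endpoints and orchestration change.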
Posted 3 months ago
4 - 9 years
19 - 23 Lacs
Pune
Work from Office
Job Summary
As a Solutions Architect, the candidate will be responsible for understanding requirements and building solution architectures for the Data Engineering and Advanced Analytics capability. The role requires a mix of technical knowledge and finance-domain functional knowledge, although the functional knowledge is not necessarily a must-have. The candidate will apply best practices to create data architectures that are secure, scalable, cost-effective, efficient, reusable, and resilient; participate in technical discussions; present architectures to stakeholders for feedback and incorporate their input; evaluate, recommend, and integrate SaaS applications to meet business needs; and provide architectures for integrating existing Eaton applications or developing new ones with a cloud-first mindset. The candidate will offer design oversight and guidance during project execution, ensuring solutions align with strategic business and IT goals. As a hands-on technical leader, the candidate will also drive Snowflake architecture, collaborate with both technical teams and business stakeholders, provide insights on best practices, and guide data-driven decision-making. This role demands expertise in Snowflake's advanced features and cloud platforms, along with a passion for mentoring junior engineers.

Job Responsibilities
- Collaborate with data engineers, system architects, and product owners to implement and support Eaton's data mesh strategy, ensuring scalability, supportability, and reusability of data products.
- Lead the design and development of data products and solutions that meet business needs and align with the overall data strategy, creating complex enterprise datasets that adhere to technology and data protection standards.
- Deliver strategic infrastructure and data pipelines for optimal data extraction, transformation, and loading, documenting solutions with architecture diagrams, dataflows, code comments, data lineage, entity relationship diagrams, and metadata.
- Design, engineer, and orchestrate scalable, supportable, and reusable datasets, managing non-functional requirements, technical specifications, and compliance.
- Assess technical capabilities across Value Streams to select and align technical solutions following enterprise guardrails, executing proofs of concept (POCs) where applicable.
- Oversee enterprise solutions for various data technology patterns and platforms, collaborating with senior business stakeholders, functional analysts, and data scientists to deliver robust data solutions aligned with quality measures.
- Support continuous integration and continuous delivery, maintaining architectural runways for products within a Value Chain, and implement data governance frameworks and tools to ensure data quality, privacy, and compliance.
- Develop and support advanced data solutions and tools, leveraging advanced data visualization tools like Power BI to enhance data insights, and manage data sourcing and consumption integration patterns from Eaton's data platform, Snowflake.
- Be accountable for end-to-end delivery of source data acquisition, complex transformation and orchestration pipelines, and front-end visualization.
- Lead collaboration with business stakeholders, applying strong communication and presentation skills to deliver rapid, incremental business value/outcomes.
- Lead and participate in the planning, definition, development, and high-level design of solutions and architectural alternatives.
- Participate in solution planning, incremental planning, product demos, and inspect-and-adapt events.
- Plan and develop the architectural runway for products that support desired business outcomes.
- Provide technical oversight and encourage security, quality, and automation.
- Support the team with a techno-functional approach as needed.

Qualifications
- BE in Computer Science, Electrical, or Electronics, or any other equivalent degree; 10 years of experience.
- Experience or knowledge of Snowflake, including administration/architecture.
- Expertise in complex SQL, Python scripting, and performance tuning.
- Understanding of Snowflake data engineering practices and dimensional modeling for performance and scalability.
- Experience with data security, access controls, and setting up security frameworks and governance (e.g., SOX).

Technical Skills
- Advanced SQL skills for building queries and resource monitors in Snowflake.
- Proficiency in automating Snowflake admin tasks and handling concepts like RBAC controls, virtual warehouses, resource monitors, SQL performance tuning, zero-copy clone, and time travel.
- Experience re-clustering data in Snowflake and understanding micro-partitions.
- Excellent analysis, documentation, communication, presentation, and interpersonal skills.
- Ability to work under pressure, meet deadlines, and manage, mentor, and coach a team of analysts.
- Strong analytical skills for complex problem-solving and understanding business problems.
- Experience in data engineering, data visualization, and creating interactive analytics solutions using Power BI and Python.
- Extensive experience with cloud platforms like Azure and cloud-based data storage and processing technologies.
- Expertise in dimensional and transactional data modeling using OLTP, OLAP, NoSQL, and big data technologies.
- Familiarity with data frameworks and storage platforms like Cloudera, Databricks, Dataiku, Snowflake, dbt, Coalesce, and data mesh.
- Experience developing and supporting data pipelines, including code, orchestration, quality, and observability.
- Expert-level programming ability in multiple data manipulation languages (Python, Spark, SQL, PL/SQL).
- Intermediate experience with DevOps, CI/CD principles, and tools, including Azure Data Factory.
- Experience with data governance frameworks and tools to ensure data quality, privacy, and compliance.
- Solid understanding of cybersecurity concepts such as encryption, hashing, and certificates.
- Strong analytical skills to evaluate data, reconcile conflicts, and abstract information.
- Continual learning of new modules, ETL tools, and programming techniques.
- Awareness of new technologies relevant to the environment.
- Established as a key data leader at the enterprise level.
Posted 3 months ago
4 - 5 years
10 - 12 Lacs
Chennai, Pune, Delhi
Work from Office
The MySQL HeatWave and Advanced Development team is responsible for the massively parallel, high-performance, in-memory query accelerator for Oracle MySQL Database Service that accelerates MySQL performance by orders of magnitude for analytics and mixed workloads. HeatWave is 6.5X faster than Amazon Redshift at half the cost, 7X faster than Snowflake at one-fifth the cost, and 1400X faster than Amazon Aurora at half the cost. MySQL Database Service with HeatWave is the only service that enables customers to run OLTP and OLAP workloads directly from their MySQL database, eliminating the need for complex, time-consuming, and expensive data movement and integration with a separate analytics database. The new MySQL Autopilot uses advanced machine-learning techniques to automate HeatWave, which makes it easier to use and further improves performance and scalability. This cutting-edge technology serves critical business needs and is changing the way data transactions function all over the world. You will make a technical impact on the world with the work you do. Join us to help further develop this amazing technology. In our flexible workplace, you'll enhance your skills and build a solid professional foundation.

As a software developer on Oracle's MySQL HeatWave team, you will contribute to an exciting team. You will use your skills and experience to directly improve the experience of Oracle's customers. You will design, implement, and deliver complex features in an independent manner. The role provides a great chance to work on a team developing a complex distributed system using a serverless architecture. The ideal candidate has many of the skills, but the key is the motivation and ability to learn quickly, as well as a passion for an excellent customer experience.

Engineers will:
- Be passionate about writing excellent, well-tested, and beautiful code.
- Design, write, test, and deliver new features.
- Enjoy testing and automation to ensure a rock-solid system.

Desired Skills
- Proficient in TypeScript / Python / HTML / JavaScript and CSS.
- Familiarity with AWS services (e.g., Lambda, Step Functions, DynamoDB, AWS Session Manager, CloudWatch, etc.).
- A good understanding of single-page web app design.
- Ability to work independently and across teams to guide other engineers through technical operations.
- Good technical writing and communication skills.
- Linux systems administration knowledge, including a good understanding of containers.
- A good understanding of operating large-scale distributed systems.
- Slack skills and comfort coordinating with others online.
- Very strong analytical skills to identify problem root causes.
- An interest in functional programming styles.
- BTech minimum in Computer Science, or equivalent.
- 4+ years of work experience as a software engineer.

As a member of the software engineering division, you will assist in defining and developing software for tasks associated with developing, debugging, or designing software applications or operating systems. Provide technical leadership to other software developers. Specify, design, and implement modest changes to existing software architecture to meet changing needs.
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark
Good-to-have skills: Apache Kafka, Apache Airflow
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support business needs and enable data-driven decision-making.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Design and develop scalable and efficient data pipelines.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement ETL processes to migrate and deploy data across systems.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Optimize and maintain data infrastructure to support business needs.
- Stay up to date with industry trends and best practices in data engineering.
- Collaborate with data scientists and analysts to understand their data needs and provide the necessary infrastructure and tools.
- Troubleshoot and resolve data-related issues in a timely manner.

Professional & Technical Skills:
- Must-have skills: Proficiency in Apache Spark, Java, Google Dataproc.
- Good-to-have skills: Experience with Apache Airflow.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
- Strong experience with multiple database models (SQL, NoSQL, OLTP, and OLAP).
- Strong experience with data streaming architecture (Kafka, Spark, Airflow).
- Strong knowledge of cloud data platforms and technologies such as GCS, BigQuery, Cloud Composer, Dataproc, and other cloud-native offerings.
- Knowledge of Infrastructure as Code (IaC) and associated tools (Terraform, Ansible, etc.).
- Experience pulling data from a variety of data source types, including mainframe (EBCDIC), fixed-length and delimited files, and databases (SQL, NoSQL, time-series).
- Experience performing analysis with large datasets in a cloud-based environment, preferably with an understanding of Google Cloud Platform (GCP).
- Comfortable communicating with various stakeholders (technical and non-technical).
- GCP Data Engineer Certification is a nice-to-have.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based in Bengaluru.
- A 15-year full-time education is required.
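The three data-munging steps the posting names — cleaning, transformation, and normalization — can be sketched in a few lines. The record shape and helper name below are hypothetical, chosen only to illustrate each step:

```python
# Illustrative data munging: drop bad records (cleaning), coerce types and
# convert units (transformation), and standardize casing (normalization).

def munge(records: list) -> list:
    cleaned = []
    for rec in records:
        city = (rec.get("city") or "").strip()
        if not city:
            continue                      # cleaning: drop incomplete rows
        cleaned.append({
            "city": city.title(),         # normalization: consistent casing
            # transformation: string Fahrenheit -> numeric Celsius
            "temp_c": round((float(rec["temp_f"]) - 32) * 5 / 9, 1),
        })
    return cleaned

data = [{"city": " bengaluru ", "temp_f": "77"}, {"city": "", "temp_f": "70"}]
print(munge(data))  # [{'city': 'Bengaluru', 'temp_c': 25.0}]
```

In a Spark pipeline the same logic would live inside a `map`/`filter` over a distributed dataset, but the per-record reasoning is identical.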
Posted 3 months ago
5 years
0 Lacs
Chhattisgarh, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper scale Data Lakehouse, built and owned by the Data Platform team. 
The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations, and more. As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do
- Help design, build, and facilitate adoption of a modern Data+ML platform
- Modularize complex ML code into standardized and repeatable components
- Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
- Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines
- Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines
- Review code changes from data scientists and champion software development best practices
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You'll Need
- B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience.
- 3+ years of experience developing and deploying machine learning solutions to production.
- Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used.
- 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience building data platform products or features with (one of) Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable.
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.).
- Production experience with infrastructure-as-code tools such as Terraform and FluxCD.
- Expert-level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools.
- Expert-level experience with CI/CD frameworks such as GitHub Actions.
- Expert-level experience with containerization frameworks.
- Strong analytical and problem-solving skills, capable of working in a dynamic environment.
- Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.
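The "standardized and simplified interfaces for data scientists" item above might look like the following toy sketch, where small steps compose into one pipeline callable. Every name and signature here is an assumption for illustration, not CrowdStrike's internal API:

```python
# Hypothetical sketch of modularizing pipeline logic behind a standard
# interface, so data scientists compose vetted steps instead of
# rewriting glue code for every experiment.
from typing import Callable, List

Step = Callable[[List[float]], List[float]]

def make_pipeline(*steps: Step) -> Step:
    """Compose steps left to right into a single callable."""
    def run(data: List[float]) -> List[float]:
        for step in steps:
            data = step(data)
        return data
    return run

def drop_negatives(xs: List[float]) -> List[float]:
    return [x for x in xs if x >= 0]

def scale_to_max(xs: List[float]) -> List[float]:
    hi = max(xs)  # assumes a non-empty input after filtering
    return [x / hi for x in xs]

pipeline = make_pipeline(drop_negatives, scale_to_max)
print(pipeline([-1.0, 2.0, 4.0]))  # [0.5, 1.0]
```

The payoff of the pattern is that each step can be reviewed, tested, and monitored once, then reused across many experimentation pipelines.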
Experience With The Following Is Desirable
- Go
- Iceberg
- Pinot or other time-series/OLAP-style databases
- Jenkins
- Parquet
- Protocol Buffers/gRPC

Benefits Of Working At CrowdStrike
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Employee Resource Groups, geographic neighbourhood groups, and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations, and social/recreational programs--on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
Posted 5 months ago
4.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information
Date Opened: 02/12/2023
Job Type: Permanent
RSD No: 6488
Industry: Technology
Min Experience: 4 Years
Max Experience: 6 Years
City: Chennai City Corporation
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600020

Job Description: Snowflake Developer
- Experience in data migration and data warehousing projects, including initial loads.
- Hands-on experience working in migration and build for a Snowflake environment.
- Well versed with the different concepts of Snowflake layers and database objects.
- Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools.
- Build non-functional requirements for Snowflake (CI/CD, monitoring, orchestration, data quality).
- Well versed with Snowflake query profiles and Snowflake information schema tables.
- Strong SQL; hands-on experience writing complex SQL queries.

Good-to-have Skills & Experience
- PL/SQL skills.
- Data modeller with 4 to 6+ years of experience designing star/snowflake schemas.
- Experience designing conceptual/logical/physical data models (both OLTP and OLAP).
- Strong SQL and code/data analysis skills.
- Experience working on heterogeneous data migration projects.
- Experience with Snowflake DWH.
- Familiarity with Azure/AWS, Databricks, Python, PySpark.
- Tools: Erwin, IBM InfoSphere Data Modeler, etc.

At Indium, diversity, equity, and inclusion (DEI) are the cornerstones of our values. We champion DEI through a dedicated council, expert sessions, and tailored training programs, ensuring an inclusive workplace for all. Our initiatives, including the WE@IN women empowerment program and our DEI calendar, foster a culture of respect and belonging. Recognized with the Human Capital Award, we are committed to creating an environment where every individual thrives. Join us in building a workplace that values diversity and drives innovation.
Posted 2 years ago
With the increasing demand for data analysis and business intelligence, OLAP (Online Analytical Processing) jobs have become popular in India. OLAP professionals are responsible for designing, building, and maintaining OLAP databases to support data analysis and reporting activities for organizations. If you are looking to pursue a career in OLAP in India, here is a comprehensive guide to help you navigate the job market.
These cities are known for having a high concentration of IT companies and organizations that require OLAP professionals.
The average salary range for OLAP professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 12 lakhs per annum.
Career progression in OLAP typically follows a trajectory from Junior Developer to Senior Developer, and then to a Tech Lead role. As professionals gain experience and expertise in OLAP technologies, they may also explore roles such as Data Analyst, Business Intelligence Developer, or Database Administrator.
In addition to OLAP expertise, professionals in this field are often expected to have knowledge of SQL, data modeling, ETL (Extract, Transform, Load) processes, data warehousing concepts, and data visualization tools such as Tableau or Power BI.
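The core OLAP operation these roles revolve around — aggregating a fact table along a chosen dimension (a "rollup") — can be shown in plain Python with made-up sales data (an illustrative example only):

```python
# Illustrative OLAP rollup: sum a measure (revenue) grouped by one
# dimension of a tiny fact table, here either region or quarter.
from collections import defaultdict

sales = [  # (region, quarter, revenue): a miniature fact table
    ("North", "Q1", 120), ("North", "Q2", 80),
    ("South", "Q1", 100), ("South", "Q2", 150),
]

def rollup(facts, key_index):
    """Aggregate the last column of each row by the chosen dimension."""
    totals = defaultdict(int)
    for row in facts:
        totals[row[key_index]] += row[-1]
    return dict(totals)

print(rollup(sales, 0))  # {'North': 200, 'South': 250}
print(rollup(sales, 1))  # {'Q1': 220, 'Q2': 230}
```

OLAP engines do exactly this at scale, precomputing or accelerating such aggregations across many dimensions at once so analysts can slice and dice interactively.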
As you prepare for OLAP job interviews in India, make sure to hone your technical skills, brush up on industry trends, and showcase your problem-solving abilities. With the right preparation and confidence, you can successfully land a rewarding career in OLAP in India. Good luck!